IASSMR.kernel.fit {fsemipar}          R Documentation

Impact point selection with IASSMR and kernel estimation

Description

This function implements the Improved Algorithm for Sparse Semiparametric Multi-functional Regression (IASSMR) with kernel estimation. This algorithm is specifically designed for estimating multi-functional partial linear single-index models, which incorporate multiple scalar variables and a functional covariate as predictors. These scalar variables are derived from the discretisation of a curve and have linear effects, while the functional covariate exhibits a single-index effect.

IASSMR is a two-stage procedure that selects the impact points of the discretised curve and estimates the model. The algorithm employs a penalised least-squares regularisation procedure, integrated with kernel estimation using Nadaraya-Watson weights. It uses B-spline expansions to represent curves and eligible functional indexes. Additionally, it utilises an objective criterion (criterion) to determine the initial number of covariates in the reduced model (w.opt), the bandwidth (h.opt), and the penalisation parameter (lambda.opt).

Usage

IASSMR.kernel.fit(x, z, y, train.1 = NULL, train.2 = NULL, 
seed.coeff = c(-1, 0, 1), order.Bspline = 3, nknot.theta = 3, 
min.q.h = 0.05, max.q.h = 0.5, h.seq = NULL, num.h = 10, range.grid = NULL, 
kind.of.kernel = "quad", nknot = NULL, lambda.min = NULL, lambda.min.h = NULL, 
lambda.min.l = NULL, factor.pn = 1, nlambda = 100, vn = ncol(z), nfolds = 10, 
seed = 123, wn = c(10, 15, 20), criterion = "GCV", penalty = "grSCAD", 
max.iter = 1000, n.core = NULL)

Arguments

x

Matrix containing the observations of the functional covariate (functional single-index component), collected by row.

z

Matrix containing the observations of the functional covariate that is discretised (linear component), collected by row.

y

Vector containing the scalar response.

train.1

Positions of the data that are used as the training sample in the 1st step. The default setting is train.1<-1:ceiling(n/2).

train.2

Positions of the data that are used as the training sample in the 2nd step. The default setting is train.2<-(ceiling(n/2)+1):n.

seed.coeff

Vector of initial values used to build the set \Theta_n (see section Details). The coefficients for the B-spline representation of each eligible functional index \theta \in \Theta_n are obtained from seed.coeff. The default is c(-1,0,1). A small enumeration sketch is given at the end of this argument list.

order.Bspline

Positive integer giving the order of the B-spline basis functions. This is the number of coefficients in each piecewise polynomial segment. The default is 3.

nknot.theta

Positive integer indicating the number of regularly spaced interior knots in the B-spline expansion of \theta_0. The default is 3.

min.q.h

Minimum quantile order of the distances between curves, which are computed using the projection semi-metric. This value determines the lower endpoint of the range from which the bandwidth is selected. The default is 0.05.

max.q.h

Maximum quantile order of the distances between curves, which are computed using the projection semi-metric. This value determines the upper endpoint of the range from which the bandwidth is selected. The default is 0.5.

h.seq

Vector containing the sequence of bandwidths. The default is a sequence of num.h equispaced bandwidths in the range constructed using min.q.h and max.q.h (illustrated in the sketch at the end of this argument list).

num.h

Positive integer indicating the number of bandwidths in the grid. The default is 10.

range.grid

Vector of length 2 containing the endpoints of the grid at which the observations of the functional covariate x are evaluated (i.e. the range of the discretisation). If range.grid=NULL, then range.grid=c(1,p) is considered, where p is the discretisation size of x (i.e. ncol(x)).

kind.of.kernel

The type of kernel function used. Currently, only the Epanechnikov kernel ("quad") is available.

nknot

Positive integer indicating the number of interior knots for the B-spline expansion of the functional covariate. The default value is (p - order.Bspline - 1)%/%2.

lambda.min

The smallest value for lambda (i.e., the lower endpoint of the sequence in which lambda.opt is selected), as a fraction of lambda.max. The default is lambda.min.l if the sample size is larger than factor.pn times the number of linear covariates and lambda.min.h otherwise (see the sketch at the end of this argument list).

lambda.min.h

The lower endpoint of the sequence in which lambda.opt is selected if the sample size is smaller than factor.pn times the number of linear covariates. The default is 0.05.

lambda.min.l

The lower endpoint of the sequence in which lambda.opt is selected if the sample size is larger than factor.pn times the number of linear covariates. The default is 0.0001.

factor.pn

Positive integer used to set lambda.min. The default value is 1.

nlambda

Positive integer indicating the number of values in the sequence from which lambda.opt is selected. The default is 100.

vn

Positive integer or vector of positive integers indicating the number of groups of consecutive variables to be penalised together. The default value is vn=ncol(z), resulting in the individual penalisation of each scalar covariate.

nfolds

Number of cross-validation folds (used when criterion="k-fold-CV"). Default is 10.

seed

Seed for the random number generator, used to ensure reproducible results when criterion="k-fold-CV". The default is 123.

wn

A vector of positive integers indicating the eligible number of covariates in the reduced model. For more information, refer to the section Details. The default is c(10,15,20).

criterion

The criterion used to select the tuning and regularisation parameters: w.opt, lambda.opt and h.opt (also vn.opt if required). Options include "GCV", "BIC", "AIC", or "k-fold-CV". The default setting is "GCV".

penalty

The penalty function applied in the penalised least-squares procedure. Currently, only "grLasso" and "grSCAD" are implemented. The default is "grSCAD".

max.iter

Maximum number of iterations allowed across the entire path. The default value is 1000.

n.core

Number of CPU cores designated for parallel execution. The default is n.core<-availableCores(omit=1).
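
A minimal sketch of how the candidate set \Theta_n could be enumerated from seed.coeff, assuming each eligible functional index is represented by a coefficient vector of length order.Bspline+nknot.theta (the length of theta.est; see the Value section). This is an illustration, not the package's internal code, which may impose additional identifiability constraints:

# hypothetical enumeration of the B-spline coefficient vectors behind Theta_n
seed.coeff <- c(-1, 0, 1)
order.Bspline <- 3
nknot.theta <- 3
d <- order.Bspline + nknot.theta                      # coefficients per functional index
Theta.coeffs <- as.matrix(expand.grid(rep(list(seed.coeff), d)))
Theta.coeffs <- Theta.coeffs[rowSums(abs(Theta.coeffs)) > 0, ]   # drop the null vector
nrow(Theta.coeffs)                                    # 3^6 - 1 = 728 candidate vectors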
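
The default training split, the rule for lambda.min and the quantile-based bandwidth grid behind h.seq can be sketched with toy data as follows. The projection below is a crude stand-in for the projection semi-metric; this is not the package's internal code:

set.seed(1)
n <- 50; p <- 100
z <- matrix(rnorm(n * p), nrow = n)           # toy linear covariates
x <- matrix(rnorm(n * 20), nrow = n)          # toy functional covariate (20 grid points)
factor.pn <- 1; lambda.min.l <- 0.0001; lambda.min.h <- 0.05
min.q.h <- 0.05; max.q.h <- 0.5; num.h <- 10

# default training samples for the two steps
train.1 <- 1:ceiling(n / 2)
train.2 <- (ceiling(n / 2) + 1):n

# default lower endpoint of the lambda sequence (as a fraction of lambda.max)
lambda.min <- if (n > factor.pn * ncol(z)) lambda.min.l else lambda.min.h
lambda.min                                    # 0.05 here, since n <= factor.pn * ncol(z)

# bandwidth grid: num.h equispaced values between the min.q.h and max.q.h
# quantiles of the distances between projected curves (toy direction theta)
theta <- rnorm(ncol(x))
proj <- x %*% theta
d.proj <- as.vector(dist(proj))
h.seq <- seq(quantile(d.proj, min.q.h), quantile(d.proj, max.q.h),
             length.out = num.h)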

Details

The multi-functional partial linear single-index model (MFPLSIM) is given by the expression

Y_i=\sum_{j=1}^{p_n}\beta_{0j}\zeta_i(t_j)+r\left(\left<\theta_0,X_i\right>\right)+\varepsilon_i,\ \ \ (i=1,\dots,n),

where:

  • Y_i is a real random response and X_i denotes a random element belonging to some separable Hilbert space \mathcal{H} with inner product denoted by \left<\cdot,\cdot\right>. The second functional predictor \zeta_i is assumed to be a curve defined on the interval [a,b], observed at the points a\leq t_1<\dots<t_{p_n}\leq b.

  • \mathbf{\beta}_0=(\beta_{01},\dots,\beta_{0p_n})^{\top} is a vector of unknown real coefficients and r(\cdot) denotes a smooth unknown link function. In addition, \theta_0 is an unknown functional direction in \mathcal{H}.

  • \varepsilon_i denotes the random error.

In the MFPLSIM, it is assumed that only a few scalar variables from the set \{\zeta(t_1),\dots,\zeta(t_{p_n})\} are part of the model. Therefore, the relevant variables in the linear component (the impact points of the curve \zeta on the response) must be selected, and the model estimated.

In this function, the MFPLSIM is fitted using the IASSMR. The IASSMR is a two-step procedure. For this, we divide the sample into two independent subsamples, each asymptotically half the size of the original sample (n_1\sim n_2\sim n/2). One subsample is used in the first stage of the method, and the other in the second stage. The subsamples are defined as follows:

\mathcal{E}^{\mathbf{1}}=\{(\zeta_i,\mathcal{X}_i,Y_i),\quad i=1,\dots,n_1\},

\mathcal{E}^{\mathbf{2}}=\{(\zeta_i,\mathcal{X}_i,Y_i),\quad i=n_1+1,\dots,n_1+n_2=n\}.

Note that these two subsamples are specified to the program through the arguments train.1 and train.2. The superscript \mathbf{s}, where \mathbf{s}=\mathbf{1},\mathbf{2}, indicates the stage of the method in which the sample, function, variable, or parameter is involved.

To explain the algorithm, we assume that the number p_n of linear covariates can be expressed as follows: p_n=q_nw_n, with q_n and w_n being integers.

  1. First step. The FASSMR (see FASSMR.kernel.fit) combined with kernel estimation is applied using only the subsample \mathcal{E}^{\mathbf{1}}. Specifically:

    • Consider a subset of the initial p_n linear covariates, which contains only w_n equally spaced discretised observations of \zeta covering the interval [a,b]. This subset is the following:

      \mathcal{R}_n^{\mathbf{1}}=\left\{\zeta\left(t_k^{\mathbf{1}}\right),\ \ k=1,\dots,w_n\right\},

      where t_k^{\mathbf{1}}=t_{\left[(2k-1)q_n/2\right]} and \left[z\right] denotes the smallest integer not less than the real number z. The size (cardinality) of this subset is provided to the program in the argument wn (which contains a sequence of eligible sizes); a small R sketch of this index construction is given after the remark below.

    • Consider the following reduced model, which involves only the w_n linear covariates belonging to \mathcal{R}_n^{\mathbf{1}}:

      Y_i=\sum_{k=1}^{w_n}\beta_{0k}^{\mathbf{1}}\zeta_i(t_k^{\mathbf{1}})+r^{\mathbf{1}}\left(\left<\theta_0^{\mathbf{1}},X_i\right>\right)+\varepsilon_i^{\mathbf{1}}.

      The penalised least-squares variable selection procedure, with kernel estimation, is applied to the reduced model. This is done using the function sfplsim.kernel.fit, which requires the remaining arguments (see sfplsim.kernel.fit). The estimates obtained after that are the outputs of the first step of the algorithm.

  2. Second step. The variables selected in the first step, along with those in their neighbourhood, are included. The penalised least-squares procedure, combined with kernel estimation, is carried out again considering only the subsample \mathcal{E}^{\mathbf{2}}. Specifically:

    • Consider a new set of variables:

      \mathcal{R}_n^{\mathbf{2}}=\bigcup_{\left\{k,\widehat{\beta}_{0k}^{\mathbf{1}}\not=0\right\}}\left\{\zeta(t_{(k-1)q_n+1}),\dots,\zeta(t_{kq_n})\right\}.

      Denoting by r_n=\sharp(\mathcal{R}_n^{\mathbf{2}}), the variables in \mathcal{R}_n^{\mathbf{2}} can be renamed as follows:

      \mathcal{R}_n^{\mathbf{2}}=\left\{\zeta(t_1^{\mathbf{2}}),\dots,\zeta(t_{r_n}^{\mathbf{2}})\right\},

    • Consider the following model, which involves only the linear covariates belonging to \mathcal{R}_n^{\mathbf{2}}:

      Y_i=\sum_{k=1}^{r_n}\beta_{0k}^{\mathbf{2}}\zeta_i(t_k^{\mathbf{2}})+r^{\mathbf{2}}\left(\left<\theta_0^{\mathbf{2}},X_i\right>\right)+\varepsilon_i^{\mathbf{2}}.

      The penalised least-squares variable selection procedure, with kernel estimation, is applied to this model using the function sfplsim.kernel.fit.

The outputs of the second step are the estimates of the MFPLSIM. For further details on this algorithm, see Novo et al. (2021).

Remark: If the condition p_n=w_n q_n is not met (i.e. p_n/w_n is not an integer), the function considers values q_{n,k}, k=1,\dots,w_n, that vary with k. Specifically:

q_{n,k}= \left\{\begin{array}{ll} [p_n/w_n]+1 & k\in\{1,\dots,p_n-w_n[p_n/w_n]\},\\ {[p_n/w_n]} & k\in\{p_n-w_n[p_n/w_n]+1,\dots,w_n\}, \end{array} \right.

where [z] denotes the integer part of the real number z.
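
A minimal sketch of this index bookkeeping with hypothetical values of p_n and w_n. It mirrors the formulas of this section (the first-step grid, the second-step neighbourhood union and the q_{n,k} of the remark); the names index01 and index02 echo the outputs listed in the Value section, but this is not the package's internal code:

p_n <- 100                                   # number of linear covariates
w_n <- 20                                    # size of the reduced set R_n^1
q_n <- p_n / w_n                             # 5 here, an integer

# First step: indices of the w_n equally spaced covariates, t_k^1 = t_[(2k-1)q_n/2]
k <- 1:w_n
index01 <- ceiling((2 * k - 1) * q_n / 2)

# Suppose the first step selects the 3rd and 7th covariates of the reduced model
selected.k <- c(3, 7)

# Second step: union of the neighbourhoods {(k-1)q_n + 1, ..., k*q_n}
index02 <- sort(unique(unlist(
  lapply(selected.k, function(kk) ((kk - 1) * q_n + 1):(kk * q_n)))))

# General case (p_n not a multiple of w_n): the q_{n,k} of the remark above
q_nk <- rep(floor(p_n / w_n), w_n)
extra <- p_n - w_n * floor(p_n / w_n)
if (extra > 0) q_nk[1:extra] <- q_nk[1:extra] + 1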

The function supports parallel computation; to disable it, set n.core=1.

Value

call

The matched call.

fitted.values

Estimated scalar response.

residuals

Differences between y and the fitted.values.

beta.est

\hat{\mathbf{\beta}} (i.e. estimate of \mathbf{\beta}_0 when the optimal tuning parameters w.opt, lambda.opt, h.opt and vn.opt are used).

theta.est

Coefficients of \hat{\theta} in the B-spline basis (i.e. estimate of \theta_0 when the optimal tuning parameters w.opt, lambda.opt, h.opt and vn.opt are used): a vector of length order.Bspline+nknot.theta.

indexes.beta.nonnull

Indexes of the non-zero \hat{\beta_{j}}.

h.opt

Selected bandwidth (when w.opt is considered).

w.opt

Selected size for \mathcal{R}_n^{\mathbf{1}}.

lambda.opt

Selected value of the penalisation parameter \lambda (when w.opt is considered).

IC

Value of the criterion function considered to select w.opt, lambda.opt, h.opt and vn.opt.

vn.opt

Selected value of vn in the second step (when w.opt is considered).

beta2

Estimate of \mathbf{\beta}_0^{\mathbf{2}} for each value of the sequence wn.

theta2

Estimate of \theta_0^{\mathbf{2}} for each value of the sequence wn (i.e. its coefficients in the B-spline basis).

indexes.beta.nonnull2

Indexes of the non-zero linear coefficients after step 2 of the method for each value of the sequence wn.

h2

Selected bandwidth in the second step of the algorithm for each value of the sequence wn.

IC2

Optimal value of the criterion function in the second step for each value of the sequence wn.

lambda2

Selected value of penalisation parameter in the second step for each value of the sequence wn.

index02

Indexes of the covariates (in the entire set of p_n) used to build \mathcal{R}_n^{\mathbf{2}} for each value of the sequence wn.

beta1

Estimate of \mathbf{\beta}_0^{\mathbf{1}} for each value of the sequence wn.

theta1

Estimate of \theta_0^{\mathbf{1}} for each value of the sequence wn (i.e. its coefficients in the B-spline basis).

h1

Selected bandwidth in the first step of the algorithm for each value of the sequence wn.

IC1

Optimal value of the criterion function in the first step for each value of the sequence wn.

lambda1

Selected value of penalisation parameter in the first step for each value of the sequence wn.

index01

Indexes of the covariates (in the whole set of p_n) used to build \mathcal{R}_n^{\mathbf{1}} for each value of the sequence wn.

index1

Indexes of the non-zero linear coefficients after step 1 of the method for each value of the sequence wn.

...

Further outputs to apply S3 methods.

Author(s)

German Aneiros Perez german.aneiros@udc.es

Silvia Novo Diaz snovo@est-econ.uc3m.es

References

Novo, S., Vieu, P., and Aneiros, G. (2021). Fast and efficient algorithms for sparse semiparametric bi-functional regression. Australian and New Zealand Journal of Statistics, 63, 606–638. doi:10.1111/anzs.12355.

See Also

See also sfplsim.kernel.fit, predict.IASSMR.kernel, plot.IASSMR.kernel and FASSMR.kernel.fit.

Alternative methods IASSMR.kNN.fit, FASSMR.kernel.fit and FASSMR.kNN.fit.

Examples


data(Sugar)

y <- Sugar$ash
x <- Sugar$wave.290
z <- Sugar$wave.240

# Identify atypical observations (ash content above 25)
index.y.25 <- y > 25
index.atip <- index.y.25
(1:268)[index.atip]

# Dataset to model (outliers removed)
x.sug <- x[!index.atip,]
z.sug <- z[!index.atip,]
y.sug <- y[!index.atip]

train <- 1:216

ptm <- proc.time()
fit <- IASSMR.kernel.fit(x=x.sug[train,], z=z.sug[train,], y=y.sug[train],
        train.1=1:108, train.2=109:216, nknot.theta=2, lambda.min.h=0.03,
        lambda.min.l=0.03, max.q.h=0.35, nknot=20,
        criterion="BIC", max.iter=5000)
proc.time() - ptm

fit
names(fit)
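
# A short follow-up (a hedged sketch, assuming the fitted object is a list
# whose components carry the names listed in the Value section):
fit$indexes.beta.nonnull   # impact points selected in the second step
fit$w.opt                  # selected size of R_n^1
fit$h.opt                  # selected bandwidth
fit$lambda.opt             # selected penalisation parameter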

