predict.mfplm.PVS {fsemipar}    R Documentation
Prediction for MFPLM

Description

predict method for the multi-functional partial linear model (MFPLM) fitted using PVS.kernel.fit or PVS.kNN.fit.
Usage
## S3 method for class 'PVS.kernel'
predict(object, newdata.x = NULL, newdata.z = NULL,
y.test = NULL, option = NULL, ...)
## S3 method for class 'PVS.kNN'
predict(object, newdata.x = NULL, newdata.z = NULL,
y.test = NULL, option = NULL, knearest.n = object$knearest,
min.knn.n = object$min.knn, max.knn.n = object$max.knn,
step.n = object$step, ...)
Arguments
object: Output of the functions mentioned in the section Description (i.e. an object of class PVS.kernel or PVS.kNN).

newdata.x: A matrix containing the new observations of the functional covariate in the functional nonparametric component, collected by row.

newdata.z: Matrix containing the new observations of the scalar covariates derived from the discretisation of a curve, collected by row.

y.test: (optional) A vector containing the new observations of the response.

option: Allows the selection among the choices 1, 2 and 3 for PVS.kernel objects, and among 1, 2, 3 and 4 for PVS.kNN objects. See the section Details.

...: Further arguments.

knearest.n: Only used for objects of class PVS.kNN when the tuning parameter k is selected again (see the section Details). Sequence of eligible numbers of nearest neighbours; the default is object$knearest.

min.knn.n: Only used for objects of class PVS.kNN when k is selected again. Minimum number of nearest neighbours considered; the default is object$min.knn.

max.knn.n: Only used for objects of class PVS.kNN when k is selected again. Maximum number of nearest neighbours considered; the default is object$max.knn.

step.n: Only used for objects of class PVS.kNN when k is selected again. Step of the sequence of numbers of nearest neighbours; the default is object$step.
Details
To obtain the predictions of the response for newdata.x and newdata.z, the following options are provided (a short usage sketch appears after the list):
If option=1, we maintain all the estimates (k.opt or h.opt, and beta.est) to predict the functional nonparametric component of the model. Since we use the estimates from the second step of the algorithm, only train.2 is used as the training sample for prediction. It should therefore be noted that k.opt or h.opt may not be suitable for predicting the functional nonparametric component of the model.

If option=2, we maintain beta.est, while the tuning parameter (h or k) is selected again to predict the functional nonparametric component of the model. This selection is performed using the leave-one-out cross-validation (LOOCV) criterion in the associated functional nonparametric model and the complete training sample (i.e. train=c(train.1,train.2)), obtaining a global selection for h or k. Since we use the entire training sample (not just a subsample of it), the sample size changes and, as a consequence, the parameters knearest, min.knn, max.knn and step given to the function PVS.kNN.fit may need to be provided again to compute predictions. For that, we add the arguments knearest.n, min.knn.n, max.knn.n and step.n.

If option=3, we maintain only the indexes of the relevant variables selected by the PVS. We estimate the linear coefficients again using sfpl.kernel.fit or sfpl.kNN.fit, respectively, without penalisation (setting lambda.seq=0) and using the entire training sample (train=c(train.1,train.2)). The method provides two predictions (and two MSEPs):

a) the prediction associated with option=1 for the class sfpl.kernel or sfpl.kNN;

b) the prediction associated with option=2 for the class sfpl.kernel or sfpl.kNN

(see the documentation of the functions predict.sfpl.kernel and predict.sfpl.kNN).

If option=4 (an option only available for the class PVS.kNN), we maintain beta.est, while the tuning parameter k is selected again to predict the functional nonparametric component of the model. This selection is performed using the LOOCV criterion in the associated functional nonparametric model and the complete training sample (i.e. train=c(train.1,train.2)), obtaining a local selection for k.
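For illustration, here is a minimal sketch of the four options. It assumes a fitted PVS.kernel object fit.kernel, a fitted PVS.kNN object fit.kNN, and test data x.test, z.test and y.test; these names, and the explicit knearest.n sequence, are illustrative rather than prescribed by the package.

# option=1: reuse k.opt or h.opt and beta.est (training sample: train.2 only)
predict(fit.kernel, newdata.x = x.test, newdata.z = z.test,
        y.test = y.test, option = 1)

# option=2: keep beta.est; reselect h (or k) globally by LOOCV on the
# complete training sample
predict(fit.kernel, newdata.x = x.test, newdata.z = z.test,
        y.test = y.test, option = 2)

# option=3: keep only the selected variables and refit without penalisation;
# returns two predictions and two MSEPs (items a) and b) above)
predict(fit.kernel, newdata.x = x.test, newdata.z = z.test,
        y.test = y.test, option = 3)

# option=4 (PVS.kNN only): keep beta.est; reselect k locally by LOOCV.
# The sequence of eligible k values can be adjusted via knearest.n
# (or via min.knn.n, max.knn.n and step.n).
predict(fit.kNN, newdata.x = x.test, newdata.z = z.test,
        y.test = y.test, option = 4, knearest.n = seq(2, 20, by = 2))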
Value
The function returns the predicted values of the response (y) for newdata.x and newdata.z. If !is.null(y.test), it also provides the mean squared error of prediction (MSEP), computed as mean((y-y.test)^2).

If option=3, two sets of predictions (and two MSEPs) are provided, corresponding to the items a) and b) mentioned in the section Details.

If is.null(newdata.x) or is.null(newdata.z), then the function returns the fitted values.
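As a check, the reported MSEP can be recomputed from the returned predictions. The sketch below assumes the returned object is a list whose components are named y and MSEP, following the value description above; the fitted object and test data are the illustrative ones used in the Details sketch.

pred <- predict(fit.kernel, newdata.x = x.test, newdata.z = z.test,
                y.test = y.test, option = 2)
mean((pred$y - y.test)^2)  # should coincide with pred$MSEP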
Author(s)
German Aneiros Perez german.aneiros@udc.es
Silvia Novo Diaz snovo@est-econ.uc3m.es
See Also
PVS.kernel.fit, sfpl.kernel.fit and predict.sfpl.kernel, or PVS.kNN.fit, sfpl.kNN.fit and predict.sfpl.kNN.
Examples
data(Sugar)
y <- Sugar$ash
x <- Sugar$wave.290
z <- Sugar$wave.240

# Outliers
index.y.25 <- y > 25
index.atip <- index.y.25
(1:268)[index.atip]

# Dataset to model
x.sug <- x[!index.atip, ]
z.sug <- z[!index.atip, ]
y.sug <- y[!index.atip]
train <- 1:216
test <- 217:266
# Fit
fit.kernel <- PVS.kernel.fit(x = x.sug[train, ], z = z.sug[train, ],
                             y = y.sug[train], train.1 = 1:108, train.2 = 109:216,
                             lambda.min.h = 0.03, lambda.min.l = 0.03,
                             max.q.h = 0.35, nknot = 20, criterion = "BIC",
                             max.iter = 5000)
fit.kNN <- PVS.kNN.fit(x = x.sug[train, ], z = z.sug[train, ], y = y.sug[train],
                       train.1 = 1:108, train.2 = 109:216, lambda.min.h = 0.07,
                       lambda.min.l = 0.07, nknot = 20, criterion = "BIC",
                       max.iter = 5000)
# Predictions
predict(fit.kernel, newdata.x = x.sug[test, ], newdata.z = z.sug[test, ],
        y.test = y.sug[test], option = 2)
predict(fit.kNN, newdata.x = x.sug[test, ], newdata.z = z.sug[test, ],
        y.test = y.sug[test], option = 2)
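# Further sketches (illustrative): option=3 returns two sets of predictions
# and MSEPs; option=4 (kNN only) reselects k locally, and the sequence of
# eligible k values can be supplied via knearest.n (the explicit sequence
# below is an assumption, its default being object$knearest).
predict(fit.kernel, newdata.x = x.sug[test, ], newdata.z = z.sug[test, ],
        y.test = y.sug[test], option = 3)
predict(fit.kNN, newdata.x = x.sug[test, ], newdata.z = z.sug[test, ],
        y.test = y.sug[test], option = 4,
        knearest.n = seq(2, 20, by = 2))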