plot.AccurateGLM {aglm}  R Documentation 
Plot contribution of each variable and residuals
Usage

## S3 method for class 'AccurateGLM'
plot(
  x,
  vars = NULL,
  verbose = TRUE,
  s = NULL,
  resid = FALSE,
  smooth_resid = TRUE,
  smooth_resid_fun = NULL,
  ask = TRUE,
  layout = c(2, 2),
  only_plot = FALSE,
  main = "",
  add_rug = FALSE,
  ...
)
Arguments

x 
A model object obtained from aglm() or cv.aglm().
vars 
Used to specify the variables to be plotted (NULL by default, in which case all variables are plotted). This parameter may have one of the following classes:

- integer or numeric vector: indices of the variables to be plotted.
- character vector: names of the variables to be plotted.
verbose 
Set to FALSE if textual outputs during plotting are not needed.
s 
A numeric value specifying λ at which plotting is required.
Note that plotting for multiple λ values is not allowed, so s must always be a single value.
resid 
Used to display residuals in plots. This parameter may have one of the following classes:

- logical (single value): If TRUE, working residuals are plotted.
- character (single value): One of "working", "pearson", or "deviance", specifying the type of residuals to be plotted.
- numeric vector with the same length as the number of observations: residual values to be plotted directly.
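For instance, the three forms of resid can be passed as follows. This is a minimal sketch, assuming the Boston data and a model fitted with aglm() as in the Examples section; the custom residuals computed here are ordinary raw residuals, used only for illustration:

```r
library(MASS)   # For the Boston data set
library(aglm)

xy <- Boston
x <- xy[, -ncol(xy)]  # Predictor columns
y <- xy$medv          # Response variable

model <- aglm(x, y)

# Logical form: TRUE plots the default residuals
plot(model, s = 0.1, resid = TRUE, verbose = FALSE)

# Numeric form: supply one residual value per observation
my_resid <- drop(y - predict(model, newx = x, s = 0.1))
plot(model, s = 0.1, resid = my_resid, verbose = FALSE)
```

The numeric form is useful when residuals have been computed externally, e.g. on a transformed scale.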
smooth_resid 
Used to display smoothing lines of residuals for quantitative variables. This parameter may have one of the following classes:

- logical (single value): If TRUE, smoothing lines are drawn.
- character (single value): If "both", both the smoothing lines and the residual values themselves are drawn.

Effective only when resid is set.
smooth_resid_fun 
Set if users need custom smoothing functions. 
ask 
By default, TRUE, meaning the user is prompted before each new page of plots is drawn. Set to FALSE to draw all pages without prompting.
layout 
Plotting multiple variables on each page is allowed. To achieve this, set it to a pair of integers indicating the numbers of rows and columns of plots per page, respectively.
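The ask and layout arguments together control paging. The sketch below, which assumes the Boston data and a model fitted with aglm() as in the Examples section, draws up to nine panels per page without prompting between pages:

```r
library(MASS)   # For the Boston data set
library(aglm)

xy <- Boston
model <- aglm(xy[, -ncol(xy)], xy$medv)

# Nine panels (3 rows x 3 columns) per page, all pages drawn without asking
plot(model, s = 0.1, layout = c(3, 3), ask = FALSE, verbose = FALSE)
```

Setting ask = FALSE is convenient when plotting non-interactively, e.g. into a PDF device.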
only_plot 
Set to TRUE to draw the plots only, suppressing auxiliary elements such as titles and legends.
main 
Used to specify the title of the plots.
add_rug 
Set to TRUE if rug plots for quantitative variables are needed.
... 
Other arguments; currently unused and simply discarded.
Value

No return value; called for side effects.
Author(s)

Kenji Kondo,
Kazuhisa Takahashi and Hikari Banno (worked on L-Variable related features)
References

Suguru Fujita, Toyoto Tanaka, Kenji Kondo and Hirokazu Iwasawa (2020).
AGLM: A Hybrid Modeling Method of GLM and Data Science Techniques.
Actuarial Colloquium Paris 2020.
https://www.institutdesactuaires.com/global/gene/link.php?doc_id=16273&fg=1
Examples

#################### using plot() and predict() ####################

library(MASS)  # For the Boston data set
library(aglm)

## Read data
xy <- Boston  # xy is a data.frame to be processed.
colnames(xy)[ncol(xy)] <- "y"  # Let medv be the objective variable, y.

## Split data into train and test sets
n <- nrow(xy)  # Sample size.
set.seed(2018)  # For reproducibility.
test.id <- sample(n, round(n/4))  # ID numbers for test data.
test <- xy[test.id, ]    # test is the data.frame for testing.
train <- xy[-test.id, ]  # train is the data.frame for training.
x <- train[-ncol(xy)]
y <- train$y
newx <- test[-ncol(xy)]
y_true <- test$y

## With the result of aglm()
model <- aglm(x, y)
lambda <- 0.1
plot(model, s = lambda, resid = TRUE, add_rug = TRUE,
     verbose = FALSE, layout = c(3, 3))

y_pred <- predict(model, newx = newx, s = lambda)
plot(y_true, y_pred)

## With the result of cv.aglm()
model <- cv.aglm(x, y)
lambda <- model@lambda.min
plot(model, s = lambda, resid = TRUE, add_rug = TRUE,
     verbose = FALSE, layout = c(3, 3))

y_pred <- predict(model, newx = newx, s = lambda)
plot(y_true, y_pred)