PenalisedRegression {sharp} | R Documentation
Penalised regression
Description
Runs penalised regression using the implementation from
glmnet
. This function does not use stability selection.
Usage
PenalisedRegression(
xdata,
ydata,
Lambda = NULL,
family,
penalisation = c("classic", "randomised", "adaptive"),
gamma = NULL,
...
)
Arguments
xdata: matrix of predictors with observations as rows and variables as columns.
ydata: optional vector or matrix of outcome(s).
Lambda: matrix of parameters controlling the level of sparsity.
family: type of regression model. This argument is defined as in glmnet.
penalisation: type of penalisation to use. Possible values are "classic" (default), "randomised" or "adaptive".
gamma: parameter for randomised or adaptive regularisation. Default is NULL.
...: additional parameters passed to glmnet.
Value
A list with:
selected: matrix of binary selection status. Rows correspond to different model parameters. Columns correspond to predictors.
beta_full: array of model coefficients. Rows correspond to different model parameters. Columns correspond to predictors. Indices along the third dimension correspond to outcome variable(s).
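As an illustrative sketch of how the returned dimensions line up with the inputs (not run; assumes the sharp package and its SimulateRegression() helper are available):

```r
# Sketch (not run): inspecting output dimensions
library(sharp)
set.seed(1)
simul <- SimulateRegression(pk = 50)
out <- PenalisedRegression(
  xdata = simul$xdata, ydata = simul$ydata,
  Lambda = c(0.1, 0.2), family = "gaussian"
)
dim(out$selected)  # one row per value in Lambda, one column per predictor
dim(out$beta_full) # third dimension indexes the outcome variable(s)
```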
References
Zou H (2006). “The adaptive lasso and its oracle properties.” Journal of the American Statistical Association, 101(476), 1418–1429.
Tibshirani R (1996). “Regression Shrinkage and Selection via the Lasso.” Journal of the Royal Statistical Society. Series B (Methodological), 58(1), 267–288. ISSN 00359246, http://www.jstor.org/stable/2346178.
See Also
SelectionAlgo, VariableSelection
Other underlying algorithm functions:
CART(), ClusteringAlgo(), PenalisedGraphical(), PenalisedOpenMx()
Examples
# Data simulation
set.seed(1)
simul <- SimulateRegression(pk = 50)
# Running the LASSO
mylasso <- PenalisedRegression(
xdata = simul$xdata, ydata = simul$ydata,
Lambda = c(0.1, 0.2), family = "gaussian"
)
# Using glmnet arguments
mylasso <- PenalisedRegression(
xdata = simul$xdata, ydata = simul$ydata,
Lambda = c(0.1), family = "gaussian",
penalty.factor = c(rep(0, 10), rep(1, 40))
)
mylasso$beta_full
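The penalisation and gamma arguments shown in the Usage section can also be set explicitly. A hedged sketch (not run; the value gamma = 0.5 is an arbitrary illustration, not a recommended default — check the package manual for the exact semantics of gamma):

```r
# Sketch (not run): requesting adaptive penalisation
# (gamma value chosen arbitrarily for illustration)
myadaptive <- PenalisedRegression(
  xdata = simul$xdata, ydata = simul$ydata,
  Lambda = c(0.1), family = "gaussian",
  penalisation = "adaptive", gamma = 0.5
)
myadaptive$selected
```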