ADMMnet {ADMMnet} | R Documentation

Fits a linear or Cox model regularized with the net (L1 and Laplacian), elastic-net (L1 and L2), or lasso (L1) penalty, and their adaptive forms, such as the adaptive lasso and the net adjusting for signs of linked coefficients. In addition, it treats the number of non-zero coefficients as another tuning parameter and selects it simultaneously with the regularization parameter `lambda`. The package uses a one-step coordinate descent algorithm and runs extremely fast by taking into account the sparsity structure of the coefficients.

```r
ADMMnet(x, y, family = c("gaussian", "cox"), penalty = c("Lasso", "Enet", "Net"),
        Omega = NULL, alpha = 1.0, lambda = NULL, nlambda = 50, rlambda = NULL,
        nfolds = 1, foldid = NULL, inzero = TRUE, adaptive = c(FALSE, TRUE),
        aini = NULL, isd = FALSE, keep.beta = FALSE, ifast = TRUE,
        thresh = 1e-07, maxit = 1e+05)
```

`x`: input matrix. Each row is an observation vector.

`y`: response variable. For `family = "gaussian"`, a continuous vector. For `family = "cox"`, a two-column matrix with columns named `time` and `status`, where `status` is a binary indicator with 1 for an event and 0 for right-censoring.

`family`: type of outcome. Can be "gaussian" or "cox".

`penalty`: penalty type. Can be "Lasso", "Enet" (elastic-net) or "Net", where "Net" combines the L1 penalty with a Laplacian penalty computed from `Omega`.

`Omega`: correlation/adjacency matrix with zero diagonal, used for `penalty = "Net"` to compute the Laplacian matrix. Default is `Omega = NULL`.

`alpha`: ratio between L1 and Laplacian for "Net", or between L1 and L2 for "Enet". Default is `alpha = 1.0`, which reduces either penalty to the lasso.

`lambda`: a user-supplied decreasing sequence. If `lambda = NULL`, a sequence is generated based on `nlambda` and `rlambda`.

`nlambda`: number of `lambda` values. Default is `nlambda = 50`.

`rlambda`: fraction of the largest `lambda` value used to determine the smallest value of the generated sequence.

`nfolds`: number of folds. With the default `nfolds = 1`, cross-validation is not performed.

`foldid`: an optional vector of values between 1 and `nfolds` identifying the fold to which each observation belongs.

`inzero`: logical flag for simultaneously selecting the number of non-zero coefficients with `lambda`. Default is `inzero = TRUE`.

`adaptive`: logical flags for the adaptive versions: the first element enables adaptive weights on the L1 penalty (adaptive lasso), and the second adjusts for the signs of linked coefficients in the Laplacian penalty.

`aini`: a user-supplied initial estimate of the coefficients, used to construct the weights and signs for the adaptive penalties.

`isd`: logical flag for outputting standardized coefficients. `x` is always standardized prior to fitting; with the default `isd = FALSE`, coefficients are returned on the original scale.

`keep.beta`: logical flag for returning estimates for all `lambda` values rather than only the selected one. Default is `keep.beta = FALSE`.

`ifast`: logical flag for efficient calculation of risk set updates for `family = "cox"`. Default is `ifast = TRUE`; see details.

`thresh`: convergence threshold for coordinate descent. Default value is `1e-07`.

`maxit`: maximum number of iterations for coordinate descent. Default is `1e+05`.
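The Laplacian part of the "Net" penalty can be illustrated directly from `Omega`. The sketch below is a minimal base-R illustration, assuming the unnormalized graph Laplacian L = D - W; the package's internal construction (including any degree normalization) may differ, and `laplacian`/`net_penalty` are hypothetical helper names, not part of the package:

```r
# Build a Laplacian matrix from an adjacency matrix Omega
# (zero diagonal, non-negative weights): L = D - W.
laplacian <- function(Omega) {
  d <- rowSums(Omega)   # degree of each node
  diag(d) - Omega       # unnormalized graph Laplacian
}

# Evaluate lambda * (alpha * ||beta||_1 + (1 - alpha)/2 * t(beta) %*% L %*% beta)
net_penalty <- function(beta, Omega, lambda, alpha) {
  L   <- laplacian(Omega)
  l1  <- sum(abs(beta))                      # lasso part
  lap <- as.numeric(t(beta) %*% L %*% beta)  # smoothness part over the graph
  lambda * (alpha * l1 + (1 - alpha) / 2 * lap)
}

# Toy graph linking variables 1-2 and 2-3
Omega <- matrix(0, 3, 3)
Omega[1, 2] <- Omega[2, 1] <- 1
Omega[2, 3] <- Omega[3, 2] <- 1
beta <- c(1, 1, 0)
net_penalty(beta, Omega, lambda = 0.5, alpha = 0.5)  # 0.625
```

With `alpha = 1` the Laplacian term drops out and the value reduces to the lasso penalty, matching the role of `alpha` described above.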

The one-step coordinate descent algorithm is applied for each `lambda`. For `family = "cox"`, `ifast = TRUE` adopts an efficient way to update the risk set, and the algorithm sometimes ends before all `nlambda` values of `lambda` have been evaluated. To evaluate small values of `lambda`, use `ifast = FALSE`. The two settings affect only the efficiency of the algorithm, not the estimates.

`x` is always standardized prior to fitting the model, and the estimates are returned on the original scale. For `family = "gaussian"`, `y` is centered by removing its mean, so there is no intercept in the output.

Cross-validation is used for tuning parameters. For `inzero = TRUE`, the number of non-zero coefficients obtained from the regularized model at each `lambda` is further selected. This is motivated by formulating L0 variable selection in ADMM form, and it shows significant improvement over the commonly used regularized methods without this technique.
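The standardization behaviour described above can be mimicked in base R. The sketch below uses ordinary least squares as a stand-in for the penalized fit, purely to show how an estimate obtained on the standardized scale maps back to the original scale:

```r
set.seed(1)
n <- 50; p <- 3
x <- matrix(rnorm(n * p), n, p)
y <- rnorm(n, x[, 1])

xm   <- colMeans(x)
xs   <- apply(x, 2, sd)
xstd <- scale(x, center = xm, scale = xs)  # standardized design matrix
yc   <- y - mean(y)                        # centered response, so no intercept

b_std  <- coef(lm(yc ~ xstd - 1))          # estimate on the standardized scale
b_orig <- b_std / xs                       # back-transform to the original scale

# The back-transformed coefficients match a direct fit on the original scale
b_ref <- coef(lm(y ~ x))[-1]
max(abs(b_orig - b_ref))                   # effectively zero
```

This is why `isd` only changes the scale on which the coefficients are reported, not the underlying fit.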

An object with S3 class `"ADMMnet"`.

`Beta`: a sparse Matrix of coefficients, stored in class "dgCMatrix".

`Beta0`: coefficients after additionally tuning the number of non-zeros, for `inzero = TRUE`.

`fit`: a data.frame containing the `lambda` sequence, the number of non-zero coefficients at each value and, with cross-validation, the cross-validation measures.

`fit0`: a data.frame containing the corresponding results after additionally tuning the number of non-zeros, for `inzero = TRUE`.

`lambda.min`: value of `lambda` that gives the minimum cross-validation error.

`lambda.opt`: value of `lambda` selected after additionally tuning the number of non-zeros, for `inzero = TRUE`.

`penalty`: penalty type.

`adaptive`: logical flags for the adaptive version (see above).

`flag`: convergence flag (for internal debugging).

The algorithm may terminate and return `NULL`.

Xiang Li, Shanghong Xie, Donglin Zeng and Yuanjia Wang

Maintainer: Xiang Li <xl2473@cumc.columbia.edu>, Shanghong Xie <sx2168@cumc.columbia.edu>

Boyd, S., Parikh, N., Chu, E., Peleato, B., & Eckstein, J. (2011).
*Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1), 1-122.*

http://dl.acm.org/citation.cfm?id=2185816

Friedman, J., Hastie, T. and Tibshirani, R. (2010).
*Regularization paths for generalized linear models via coordinate descent, Journal of Statistical Software, Vol. 33(1), 1.*

http://www.jstatsoft.org/v33/i01/

Li, C., and Li, H. (2010).
*Variable selection and regression analysis for graph-structured covariates with an application to genomics. The Annals of Applied Statistics, 4(3), 1498.*

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3423227/

Sun, H., Lin, W., Feng, R., and Li, H. (2014).
*Network-regularized high-dimensional Cox regression for analysis of genomic data, Statistica Sinica.*

http://www3.stat.sinica.edu.tw/statistica/j24n3/j24n319/j24n319.html

```r
### Linear model ###
set.seed(1213)
N <- 100; p <- 30; p1 <- 5
x <- matrix(rnorm(N * p), N, p)
beta <- rnorm(p1)
xb <- x[, 1:p1] %*% beta
y <- rnorm(N, xb)
fiti <- ADMMnet(x, y, penalty = "Lasso", nlambda = 10, nfolds = 10)  # Lasso
# attributes(fiti)

### Cox model ###
set.seed(1213)
N <- 100; p <- 30; p1 <- 5
x <- matrix(rnorm(N * p), N, p)
beta <- rnorm(p1)
xb <- x[, 1:p1] %*% beta
ty <- rexp(N, exp(xb))
tcens <- rbinom(n = N, prob = 0.3, size = 1)  # censoring indicator
y <- cbind(time = ty, status = 1 - tcens)
fiti <- ADMMnet(x, y, family = "cox", penalty = "Lasso", nlambda = 10, nfolds = 10)  # Lasso
# attributes(fiti)
```

[Package *ADMMnet* version 0.1.1 Index]