cocktail {fastcox} | R Documentation
Fits the regularization paths for the elastic net penalized Cox's model
Description
Fits a regularization path for the elastic net penalized Cox's model at a sequence of regularization parameters lambda.
Usage
cocktail(x, y, d,
         nlambda = 100,
         lambda.min = ifelse(nobs < nvars, 1e-2, 1e-4),
         lambda = NULL,
         alpha = 1,
         pf = rep(1, nvars),
         exclude,
         dfmax = nvars + 1,
         pmax = min(dfmax * 1.2, nvars),
         standardize = TRUE,
         eps = 1e-6,
         maxit = 3e4)
Arguments
x
matrix of predictors, of dimension nobs x nvars; each row is an observation vector.
y
a survival time for Cox models. Currently tied failure times are not supported.
d
censor status with 1 if died and 0 if right censored.
nlambda
the number of lambda values. Default is 100.
lambda.min
given as a fraction of lambda.max (the smallest value of lambda for which all coefficients are zero). The default depends on the relationship between nobs and nvars: if nobs < nvars, the default is 0.01; if nobs >= nvars, the default is 0.0001. It has no effect if a lambda sequence is supplied by the user.
lambda
a user supplied lambda sequence. Typically, by leaving this option unspecified, users let the program compute its own lambda sequence based on nlambda and lambda.min. Supplying a value of lambda overrides this.
alpha
the elastic net mixing parameter, with 0 < alpha <= 1; see Details for the penalty definition. alpha=1 gives the lasso penalty and is the default.
pf
separate penalty weights can be applied to each coefficient of beta to allow differential shrinkage. Can be 0 for some variables, which implies no shrinkage and that the variable is always included in the model. Default is 1 for all variables.
exclude
indices of variables to be excluded from the model. Default is none. Equivalent to an infinite penalty factor.
dfmax
limit the maximum number of variables in the model. Useful for very large nvars, if a partial path is desired. Default is nvars+1.
pmax
limit the maximum number of variables ever to be nonzero. For example, once a coefficient enters the model, it is counted no matter how many times it exits or re-enters along the path. Default is min(dfmax*1.2, nvars).
standardize
logical flag for variable standardization, prior to fitting the model sequence. If TRUE, the columns of x are standardized before fitting; the coefficients are always returned on the original scale. Default is TRUE.
eps
convergence threshold for coordinate majorization descent. Each inner coordinate majorization descent loop continues until the relative change in any coefficient (i.e. max_j (beta_new[j]-beta_old[j])^2) is less than eps. Default is 1e-6.
maxit
maximum number of outer-loop iterations allowed at a fixed lambda value. Default is 3e4. If models do not converge, consider increasing maxit.
Details
The algorithm estimates \beta based on observed data, through the elastic net penalized log partial likelihood of Cox's model,

\hat{\beta} = \arg\min_{\beta} \left( -loglik(Data, \beta) + \lambda P(\beta) \right).

It can compute estimates at a fine grid of \lambda values, so that a data-driven optimal \lambda can be picked for fitting a 'best' final model. The penalty is a combination of the l1 and l2 penalties:

P(\beta) = \frac{1-\alpha}{2} \|\beta\|_2^2 + \alpha \|\beta\|_1.

alpha=1 is the lasso penalty.
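For concreteness, here is a minimal R sketch of the penalty P(\beta) above; the function name enet.penalty is illustrative only and is not part of fastcox.

## minimal sketch of the elastic net penalty above;
## enet.penalty is a hypothetical helper, not part of fastcox
enet.penalty <- function(beta, alpha) {
    (1 - alpha)/2 * sum(beta^2) + alpha * sum(abs(beta))
}
enet.penalty(c(0.5, -1, 0), alpha = 1)  # alpha=1 reduces to the lasso penalty: 1.5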
For reasons of computational speed, if models are not converging or are running slowly, consider increasing eps, decreasing nlambda, or increasing lambda.min before increasing maxit.
FAQ:
Question: “I am not sure how we are optimizing alpha. I can get the optimal lambda for each value of alpha. But how do I select the optimum alpha?”
Answer: cv.cocktail only finds the optimal lambda for a fixed alpha. So to choose a good alpha you need to run cross-validation on a grid of alpha values, say (0.1, 0.3, 0.6, 0.9, 1), let cv.cocktail choose the optimal lambda for each alpha, and then pick the (alpha, lambda) pair that corresponds to the lowest predicted deviance, as in the sketch below.
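A minimal sketch of this grid search, assuming the FHT example data shipped with the package and that the cv.cocktail object exposes cvm and lambda.min components:

library(fastcox)
data(FHT)
alphas <- c(0.1, 0.3, 0.6, 0.9, 1)
## one cross-validation run per alpha; alpha is passed through to cocktail()
cv.list <- lapply(alphas, function(a)
    cv.cocktail(x = FHT$x, y = FHT$y, d = FHT$status, alpha = a, nfolds = 5))
## smallest cross-validated error achieved for each alpha
best.cvm <- sapply(cv.list, function(cv) min(cv$cvm))
best <- which.min(best.cvm)
c(alpha = alphas[best], lambda = cv.list[[best]]$lambda.min)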
Question: “I understand you are referring to minimizing the quantity cv.cocktail$cvm, the mean 'cross-validated error', to optimize alpha and lambda as you did in your implementation. However, I don't know what the equation of this error is, and this error is not referred to in your paper either. Do you mind explaining what this is?”
Answer: We first define the log partial likelihood for the Cox model. Assume \hat{\beta}^{[k]} is the estimate fitted on the k-th fold. Define the log partial likelihood function as

L(Data, \hat{\beta}^{[k]}) = \sum_{s=1}^{S} \left[ x_{i_s}^{T} \hat{\beta}^{[k]} - \log\left( \sum_{i \in R_s} \exp(x_i^{T} \hat{\beta}^{[k]}) \right) \right].

Then the log partial likelihood deviance of the k-th fold is defined as

D(Data, k) = -2 \, L(Data, \hat{\beta}^{[k]}).
We now define the measurement we actually use for cross-validation: it is the difference between the log partial likelihood deviance evaluated on the full dataset and that evaluated on the dataset with the k-th fold excluded. The cross-validated error is defined as

CV-ERR[k] = D(Data[full], k) - D(Data[k-th fold excluded], k).
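Purely for illustration, the log partial likelihood above can be transcribed directly into R for untied failure times; coxpl and its arguments are hypothetical names, not part of fastcox:

## hypothetical helper, not part of fastcox: log partial likelihood for
## untied failure times, with beta playing the role of the fold estimate
coxpl <- function(x, y, d, beta) {
    eta <- drop(x %*% beta)                  # linear predictors x_i' beta
    sum(sapply(which(d == 1), function(s) {
        risk <- y >= y[s]                    # risk set R_s at failure time y[s]
        eta[s] - log(sum(exp(eta[risk])))
    }))
}
## deviance for fold k: D <- -2 * coxpl(x, y, d, beta.k); the cross-validated
## error is then the full-data deviance minus the fold-excluded deviance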
Value
An object with S3 class cocktail.
call
the call that produced this object.
beta
a nvars x length(lambda) matrix of coefficients, stored as a sparse matrix (dgCMatrix). Use as.matrix() to convert it to an ordinary matrix.
lambda
the actual sequence of lambda values used.
df
the number of nonzero coefficients for each value of lambda.
dim
dimension of the coefficient matrix.
npasses
total number of iterations (of the innermost loop) summed over all lambda values.
jerr
error flag, for warnings and errors; 0 if no error.
Author(s)
Yi Yang and Hui Zou
Maintainer: Yi Yang <yi.yang6@mcgill.ca>
References
Yang, Y. and Zou, H. (2013),
"A Cocktail Algorithm for Solving The Elastic Net Penalized Cox's Regression in High Dimensions", Statistics and Its Interface, 6:2, 167-173.
https://github.com/emeryyi/fastcox
See Also
plot.cocktail
Examples
library(fastcox)
data(FHT)
m1 <- cocktail(x=FHT$x, y=FHT$y, d=FHT$status, alpha=0.5)
predict(m1, type="nonzero")
plot(m1)
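As a further sketch using the components documented under Value (the target lambda value 0.1 below is arbitrary):

## inspect the path via the beta and lambda components returned above
dim(m1$beta)                           # nvars x length(lambda)
b <- as.matrix(m1$beta)                # densify the sparse coefficient matrix
b[, which.min(abs(m1$lambda - 0.1))]   # coefficients at the lambda closest to 0.1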