tfCox {tfCox}    R Documentation
Fit the additive trend filtering Cox model with a range of tuning parameters
Description
Fit the additive trend filtering Cox model, in which each component function is estimated to be piecewise constant or piecewise polynomial.
Usage
tfCox(dat, ord=0, alpha=1, lambda.seq=NULL, discrete=NULL, n.lambda=30,
lambda.min.ratio = 0.01, tol=1e-6, niter=1000, stepSize=25, backtracking=0)
Arguments
dat
A list containing the data to fit: the observed failure or censoring times, the censoring status, and the n x p covariate matrix X (for example, the list returned by sim_dat).
ord
The polynomial order of the trend filtering fit; a non-negative integer (ord = 0 gives a piecewise constant fit, ord = 1 piecewise linear, ord = 2 piecewise quadratic). Default is 0.
alpha
The trade-off between the trend filtering penalty and the group lasso penalty. It must be in [0,1].
lambda.seq
A vector of non-negative tuning parameters. If provided, the model is fit at each value in lambda.seq; otherwise a decreasing sequence of lambda values is computed automatically (see n.lambda and lambda.min.ratio).
discrete
A vector of covariate/feature indices that are discrete. Discrete covariates are not penalized in the model. Default is NULL, meaning that no covariates are treated as discrete.
n.lambda
The number of lambda values to consider; the default is 30.
lambda.min.ratio
Smallest value for lambda.seq, as a fraction of the maximum lambda value, which is the smallest value such that the penalty term is zero. The default is 0.01.
tol
Convergence criterion for estimates.
niter
Maximum number of iterations.
stepSize
Initial step size. Default is 25.
backtracking
Whether backtracking should be used: 1 (TRUE) or 0 (FALSE). Default is 0 (FALSE).
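As an illustration of how these arguments fit together, here is a minimal sketch of assembling the input list and calling tfCox with a few non-default settings. The element names time, status, and X are an assumption based on the data structure produced by sim_dat and used in the Examples below.

set.seed(1)
n <- 60; p <- 3
X <- matrix(runif(n * p), n, p)                    # n x p covariate matrix
time <- rexp(n, rate = exp(sin(2 * pi * X[, 1])))  # failure/censoring times
status <- rbinom(n, 1, 0.7)                        # 1 = event, 0 = censored
dat <- list(time = time, status = status, X = X)   # assumed element names
fit <- tfCox(dat, ord = 0, alpha = 1, n.lambda = 10,
             lambda.min.ratio = 0.05, backtracking = 1)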
Details
The optimization problem has the form

l(\theta) + \alpha \lambda \sum_{j=1}^p \|D_j P_j \theta_j\|_1 + (1-\alpha) \lambda \sum_{j=1}^p \|\theta_j\|_2,

where l is the loss function, defined as the negative log partial likelihood divided by n, \lambda controls the overall amount of regularization, and \alpha provides a trade-off between the trend filtering penalty and the group lasso penalty. Here P_j is the permutation that sorts the entries of \theta_j according to the values of the j-th covariate, and D_j is the discrete difference operator of order ord + 1. The covariate matrix X is not standardized before solving the optimization problem.
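To make the roles of P_j and D_j concrete, the following minimal sketch (the helper tf_penalty is hypothetical and not part of the package) evaluates the penalty portion of this objective for a given n x p matrix theta, under the assumption stated above that P_j sorts by the j-th covariate and D_j takes differences of order ord + 1.

tf_penalty <- function(theta, X, lambda, alpha, ord = 0) {
  pen <- 0
  for (j in seq_len(ncol(theta))) {
    tj <- theta[order(X[, j]), j]            # P_j theta_j: entries ordered by covariate j
    d  <- diff(tj, differences = ord + 1)    # D_j P_j theta_j: (ord+1)-th order differences
    pen <- pen + alpha * lambda * sum(abs(d)) +        # trend filtering penalty
           (1 - alpha) * lambda * sqrt(sum(tj^2))      # group lasso penalty ||theta_j||_2
  }
  pen
}

Given a fitted object, tf_penalty(fit$theta.list[[k]], dat$X, fit$lambda.seq[k], alpha, ord) evaluates this quantity at the k-th fit.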
Value
An object with S3 class "tfCox".
ord
the polynomial order of the trend filtering fit. Specified by user (or default).
alpha
as specified by user (or default).
lambda.seq
vector of lambda values considered.
theta.list
list of estimated theta matrices, each of dimension n x p. Each component of the list corresponds to the fit from one value of lambda.seq.
num.knots
vector of the number of knots in the estimated theta. Each component corresponds to the fit from one value of lambda.seq.
num.nonsparse
vector of the proportion of non-sparse/non-zero covariates/features. Each component corresponds to the fit from one value of lambda.seq.
dat
as specified by user.
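A brief sketch of inspecting these components on a fitted object, using the simulated data from the Examples below:

dat <- sim_dat(n = 100, zerof = 0, scenario = 1)
fit <- tfCox(dat, ord = 0, alpha = 1)
length(fit$lambda.seq)      # number of lambda values considered
dim(fit$theta.list[[1]])    # n x p matrix of estimated theta at the first lambda
fit$num.knots               # number of knots at each lambda
fit$num.nonsparse           # proportion of non-zero features at each lambda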
Author(s)
Jiacheng Wu
References
Jiacheng Wu & Daniela Witten (2019) Flexible and Interpretable Models for Survival Data, Journal of Computational and Graphical Statistics, DOI: 10.1080/10618600.2019.1592758
See Also
summary.tfCox, predict.tfCox, plot.tfCox, cv_tfCox
Examples
###################################################################
#constant trend filtering (fused lasso) with adaptively chosen knots
#generate data from simulation scenario 1 with piecewise constant functions
set.seed(1234)
dat = sim_dat(n=100, zerof=0, scenario=1)
#fit piecewise constant for alpha=1 and a range of lambda
fit = tfCox(dat, ord=0, alpha=1)
summary(fit)
#plot the fit of lambda index 15 and the first predictor
plot(fit, which.lambda=15, which.predictor=1)
#cross-validation to choose the tuning parameter lambda with fixed alpha=1
cv = cv_tfCox(dat, ord=0, alpha=1, n.fold=2)
summary(cv)
cv$best.lambda
#plot the cross-validation curve
plot(cv)
#fit the model with the best tuning parameter chosen by cross-validation
one.fit = tfCox(dat, ord=0, alpha=1, lambda.seq=cv$best.lambda)
#predict theta from the fitted tfCox object
theta_hat = predict(one.fit, newX=dat$X, which.lambda=1)
#plot the fitted theta_hat (line) with the true theta (dot)
for (i in 1:4) {
  ordi = order(dat$X[,i])
  plot(dat$X[ordi,i], dat$true_theta[ordi,i],
       xlab=paste("predictor",i), ylab="theta")
  lines(dat$X[ordi,i], theta_hat[ordi,i], type="s")
}
#################################################################
#linear trend filtering with adaptively chosen knots
#generate data from simulation scenario 3 with piecewise linear functions
set.seed(1234)
dat = sim_dat(n=100, zerof=0, scenario=3)
#fit piecewise linear for alpha=1 and a range of lambda
fit = tfCox(dat, ord=1, alpha=1)
summary(fit)
#plot the fit of lambda index 15 and the first predictor
plot(fit, which.lambda=15, which.predictor=1)
#cross-validation to choose the tuning parameter lambda with fixed alpha=1
cv = cv_tfCox(dat, ord=1, alpha=1, n.fold=2)
summary(cv)
#plot the cross-validation curve
plot(cv)
#fit the model with the best tuning parameter chosen by cross-validation
one.fit = tfCox(dat, ord=1, alpha=1, lambda.seq=cv$best.lambda)
#predict theta from the fitted tfCox object
theta_hat = predict(one.fit, newX=dat$X, which.lambda=1)
#plot the fitted theta_hat (line) with the true theta (dot)
for (i in 1:4) {
  ordi = order(dat$X[,i])
  plot(dat$X[ordi,i], dat$true_theta[ordi,i],
       xlab=paste("predictor",i), ylab="theta")
  lines(dat$X[ordi,i], theta_hat[ordi,i], type="l")
}