tune.fit {SIS}    R Documentation

Fits a generalized linear model or a Cox proportional hazards model, via the glmnet and ncvreg packages, using various methods for choosing the regularization parameter \lambda

Description

This function fits a generalized linear model or a Cox proportional hazards model via penalized maximum likelihood, with available penalties as indicated in the glmnet and ncvreg packages. Instead of providing the whole regularization solution path, the function returns the solution at a unique value of \lambda, the one optimizing the criterion specified in tune.

Usage

tune.fit(
  x,
  y,
  family = c("gaussian", "binomial", "poisson", "cox"),
  penalty = c("SCAD", "MCP", "lasso"),
  concavity.parameter = switch(penalty, SCAD = 3.7, 3),
  tune = c("cv", "aic", "bic", "ebic"),
  nfolds = 10,
  type.measure = c("deviance", "class", "auc", "mse", "mae"),
  gamma.ebic = 1
)
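
For instance, a minimal call might look like the sketch below, which fits a lasso-penalized linear model on simulated data and selects \lambda by 10-fold cross-validation (the data and variable names are illustrative only):

library(SIS)
set.seed(1)
n = 100; p = 200
x = matrix(rnorm(n * p), n, p)                # simulated design matrix
y = drop(x[, 1:5] %*% rep(2, 5) + rnorm(n))   # response driven by the first 5 columns
fit = tune.fit(x, y, family = 'gaussian', penalty = 'lasso', tune = 'cv')
fit$ix      # indices of the selected variables
fit$lambda  # value of lambda chosen by cross-validation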

Arguments

x

The design matrix, of dimensions n * p, without an intercept. Each row is an observation vector.

y

The response vector of dimension n * 1. Quantitative for family='gaussian', non-negative counts for family='poisson', binary (0-1) for family='binomial'. For family='cox', y should be an object of class Surv, as provided by the function Surv() in the package survival.

family

Response type (see above).

penalty

The penalty to be applied in the regularized likelihood subproblems. 'SCAD' (the default), 'MCP', or 'lasso' are provided.

concavity.parameter

The tuning parameter used to adjust the concavity of the SCAD/MCP penalty. Default is 3.7 for SCAD and 3 for MCP.

tune

Method for selecting the regularization parameter along the solution path of the penalized likelihood problem. Available options are tune='cv' (cross-validation), tune='aic' (Akaike information criterion), tune='bic' (Bayesian information criterion), and tune='ebic' (extended Bayesian information criterion). See the references at the end for details, and the sketch after this argument list for example calls.

nfolds

Number of folds used in cross-validation. The default is 10.

type.measure

Loss to use for cross-validation. Currently five options, not all available for all models. The default is type.measure='deviance', which uses squared-error for gaussian models (also equivalent to type.measure='mse' in this case), deviance for logistic and poisson regression, and partial-likelihood for the Cox model. Both type.measure='class' and type.measure='auc' apply only to logistic regression and give misclassification error and area under the ROC curve, respectively. type.measure='mse' or type.measure='mae' (mean absolute error) can be used by all models except the 'cox'; they measure the deviation from the fitted mean to the response. For penalty='SCAD' and penalty='MCP', only type.measure='deviance' is available.

gamma.ebic

Specifies the parameter in the Extended BIC criterion penalizing the size of the corresponding model space. The default is gamma.ebic=1. See references at the end for details.
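
As a brief sketch of how these tuning arguments combine (simulated binary data; illustrative only):

library(SIS)
set.seed(2)
x = matrix(rnorm(100 * 50), 100, 50)
y = rbinom(100, 1, plogis(x[, 1] - x[, 2]))
# lambda chosen by the extended BIC with a SCAD penalty
fit.ebic = tune.fit(x, y, family = 'binomial', penalty = 'SCAD',
                    tune = 'ebic', gamma.ebic = 1)
# lambda chosen by 5-fold cross-validation with misclassification error as
# the loss (type.measure='class' is available with the lasso penalty)
fit.cv = tune.fit(x, y, family = 'binomial', penalty = 'lasso',
                  tune = 'cv', nfolds = 5, type.measure = 'class')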

Value

Returns an object with the following components (a short usage sketch follows the list):

ix

The vector of indices of the nonzero coefficients selected by the maximum penalized likelihood procedure with tune as the method for choosing the regularization parameter.

a0

The intercept of the final model selected by tune.

beta

The vector of coefficients of the final model selected by tune.

fit

The fitted penalized regression object.

lambda

The corresponding lambda in the final model.

lambda.ind

The index on the solution path for the final model.
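
A sketch of how these components can be used downstream, assuming penalty='lasso' so that fit is the underlying glmnet object (an assumption based on the Description; for the SCAD and MCP penalties the fit comes from ncvreg instead):

library(SIS)
set.seed(3)
x = matrix(rnorm(100 * 50), 100, 50)
y = drop(x[, 1:3] %*% c(2, -2, 2) + rnorm(100))
out = tune.fit(x, y, family = 'gaussian', penalty = 'lasso', tune = 'bic')
out$ix      # indices of variables with nonzero coefficients
out$lambda  # value of lambda minimizing the BIC along the path
# predictions at the selected lambda from the stored fit
# (assumes out$fit is a glmnet object, so predict.glmnet applies)
yhat = predict(out$fit, newx = x, s = out$lambda)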

Author(s)

Jianqing Fan, Yang Feng, Diego Franco Saldana, Richard Samworth, and Yichao Wu

References

Jerome Friedman and Trevor Hastie and Rob Tibshirani (2010) Regularization Paths for Generalized Linear Models Via Coordinate Descent. Journal of Statistical Software, 33(1), 1-22.

Noah Simon and Jerome Friedman and Trevor Hastie and Rob Tibshirani (2011) Regularization Paths for Cox's Proportional Hazards Model Via Coordinate Descent. Journal of Statistical Software, 39(5), 1-13.

Patrick Breheny and Jian Huang (2011) Coordinate Descent Algorithms for Nonconvex Penalized Regression, with Applications to Biological Feature Selection. The Annals of Applied Statistics, 5, 232-253.

Hirotugu Akaike (1973) Information Theory and an Extension of the Maximum Likelihood Principle. In Proceedings of the 2nd International Symposium on Information Theory, BN Petrov and F Csaki (eds.), 267-281.

Gideon Schwarz (1978) Estimating the Dimension of a Model. The Annals of Statistics, 6, 461-464.

Jiahua Chen and Zehua Chen (2008) Extended Bayesian Information Criteria for Model Selection with Large Model Spaces. Biometrika, 95, 759-771.

Examples



library(SIS)
set.seed(0)

# Leukemia training data: the last column is the binary class label,
# the remaining columns are the predictors.
data('leukemia.train', package = 'SIS')
y.train = leukemia.train[, dim(leukemia.train)[2]]
x.train = as.matrix(leukemia.train[, -dim(leukemia.train)[2]])
x.train = standardize(x.train)

# Penalized logistic regression (default SCAD penalty) on the first 3500
# predictors, with lambda selected by BIC.
model = tune.fit(x.train[, 1:3500], y.train, family = 'binomial', tune = 'bic')
model$ix    # indices of the selected predictors
model$a0    # intercept of the selected model
model$beta  # coefficients of the selected model


