glmtrans {glmtrans}    R Documentation

Fit a transfer learning generalized linear model (GLM) with elasticnet regularization.

Description

Fit a transfer learning generalized linear model through elastic net regularization, using a target data set and multiple source data sets. It also implements a transferable source detection algorithm, which helps avoid negative transfer in practice. Gaussian, logistic and Poisson models are currently supported.

Usage

glmtrans(
  target,
  source = NULL,
  family = c("gaussian", "binomial", "poisson"),
  transfer.source.id = "auto",
  alpha = 1,
  standardize = TRUE,
  intercept = TRUE,
  nfolds = 10,
  cores = 1,
  valid.proportion = NULL,
  valid.nfolds = 3,
  lambda = c(transfer = "lambda.1se", debias = "lambda.min", detection = "lambda.1se"),
  detection.info = TRUE,
  target.weights = NULL,
  source.weights = NULL,
  C0 = 2,
  ...
)

Arguments

target

target data. Should be a list with elements x and y, where x is the predictor matrix with each row as an observation and each column as a variable, and y is the response vector.

source

source data. Should be a list of sublists, where each sublist is a source data set with elements x and y, which have the same meaning as in the target data.

family

response type. Can be "gaussian", "binomial" or "poisson". Default = "gaussian".

  • "gaussian": Gaussian distribution.

  • "binomial": binomial distribution (logistic regression). When family = "binomial", the input response in both target and source should be coded as 0/1.

  • "poisson": Poisson distribution. When family = "poisson", the input response in both target and source should be non-negative.

transfer.source.id

transferable source indices. Can be either a subset of {1, ..., length(source)}, "all" or "auto". Default = "auto".

  • a subset of {1, ..., length(source)}: only transfer sources with the specific indices.

  • "all": transfer all sources.

  • "auto": run the transferable source detection algorithm to automatically detect which sources to transfer. For details of the algorithm, refer to the documentation of function source_detection.
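
The three options above can be sketched as follows; target and source are placeholders for data in the list format described above, not objects defined on this page:

```r
library(glmtrans)

# target: list(x, y); source: list of such lists (hypothetical data)
fit.sub  <- glmtrans(target, source, transfer.source.id = c(1, 3))  # transfer sources 1 and 3 only
fit.all  <- glmtrans(target, source, transfer.source.id = "all")    # transfer every source
fit.auto <- glmtrans(target, source, transfer.source.id = "auto")   # detect transferable sources first
```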

alpha

the elastic net mixing parameter, with 0 \leq \alpha \leq 1. The penalty is defined as

(1-\alpha)/2 ||\beta||_2^2 + \alpha ||\beta||_1.

alpha = 1 encodes the lasso penalty, while alpha = 0 encodes the ridge penalty. Default = 1.

standardize

the logical flag for x variable standardization, prior to fitting the model sequence. The coefficients are always returned on the original scale. Default is TRUE.

intercept

the logical indicator of whether the intercept should be fitted or not. Default = TRUE.

nfolds

the number of folds. Used in the cross-validation for GLM elastic net fitting procedure. Default = 10. Smallest value allowable is nfolds = 3.

cores

the number of cores used for parallel computing. Default = 1.

valid.proportion

the proportion of target data to be used as validation data when detecting transferable sources. Useful only when transfer.source.id = "auto". Default = NULL, meaning that cross-validation will be applied.

valid.nfolds

the number of folds used in cross-validation procedure when detecting transferable sources. Useful only when transfer.source.id = "auto" and valid.proportion = NULL. Default = 3.

lambda

a vector indicating the choice of lambda in the transferring, debiasing and detection steps. Should be a vector with names "transfer", "debias" and "detection", each component of which can be either "lambda.min" or "lambda.1se". Component transfer is the lambda (the penalty parameter) used in the transferring step, component debias is the lambda used in the debiasing step, and component detection is the lambda used in the transferable source detection algorithm. The default is "lambda.1se" for both transfer and detection, and "lambda.min" for debias. To change the default setting, input a vector with the corresponding names and values. Examples: lambda = c(transfer = "lambda.min", debias = "lambda.1se"); lambda = c(transfer = "lambda.min", detection = "lambda.min").

  • "lambda.min": value of lambda that gives minimum mean cross-validated error in the sequence of lambda.

  • "lambda.1se": largest value of lambda such that error is within 1 standard error of the minimum.
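
For instance, the transfer-step lambda could be overridden as follows; target and source are placeholders for data in the format described above:

```r
library(glmtrans)

# Use the CV-minimizing lambda in the transfer step; unnamed components
# not supplied here keep their documented defaults.
fit <- glmtrans(target, source,
                lambda = c(transfer = "lambda.min",
                           debias = "lambda.min",
                           detection = "lambda.1se"))
```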

detection.info

the logical flag indicating whether to print detection information or not. Useful only when transfer.source.id = "auto". Default = TRUE.

target.weights

weight vector for the target instances. Should be a vector with the same length as the target response. Default = NULL, in which case all instances are equally weighted.

source.weights

a list of weight vectors for the instances from each source. Should be a list whose length equals the number of sources. Default = NULL, in which case all instances are equally weighted.
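
A sketch of custom weighting with two sources; target and source are placeholders for data in the format described above:

```r
library(glmtrans)

# Equal weights for target instances; downweight the second source
# relative to the first (hypothetical choice for illustration).
w.target <- rep(1, length(target$y))
w.source <- list(rep(1,   length(source[[1]]$y)),
                 rep(0.5, length(source[[2]]$y)))

fit <- glmtrans(target, source,
                target.weights = w.target,
                source.weights = w.source)
```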

C0

the constant used in the transferable source detection algorithm. See Algorithm 2 in Tian, Y. and Feng, Y., 2021. Default = 2.

...

additional arguments.

Value

An object with S3 class "glmtrans".

beta

the estimated coefficient vector.

family

the response type.

transfer.source.id

the transferable source indices. If the input transfer.source.id = 1:length(source) or transfer.source.id = "all", then the output transfer.source.id = 1:length(source). If the input transfer.source.id = "auto", only the transferable sources detected by the algorithm will be output.

fitting.list

a list of other parameters of the fitted model.

References

Tian, Y. and Feng, Y., 2021. Transfer Learning under High-dimensional Generalized Linear Models. arXiv preprint arXiv:2105.14328.

Li, S., Cai, T.T. and Li, H., 2020. Transfer learning for high-dimensional linear regression: Prediction, estimation, and minimax optimality. arXiv preprint arXiv:2006.10593.

Friedman, J., Hastie, T. and Tibshirani, R., 2010. Regularization paths for generalized linear models via coordinate descent. Journal of statistical software, 33(1), p.1.

Zou, H. and Hastie, T., 2005. Regularization and variable selection via the elastic net. Journal of the royal statistical society: series B (statistical methodology), 67(2), pp.301-320.

Tibshirani, R., 1996. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1), pp.267-288.

See Also

predict.glmtrans, source_detection, models, plot.glmtrans, cv.glmnet, glmnet.

Examples

set.seed(0, kind = "L'Ecuyer-CMRG")

# fit a linear regression model
D.training <- models("gaussian", type = "all", n.target = 100, K = 2, p = 500)
D.test <- models("gaussian", type = "target", n.target = 100, p = 500)
fit.gaussian <- glmtrans(D.training$target, D.training$source)
y.pred.glmtrans <- predict(fit.gaussian, D.test$target$x)

# compare the test MSE with classical Lasso fitted on target data
library(glmnet)
fit.lasso <- cv.glmnet(x = D.training$target$x, y = D.training$target$y)
y.pred.lasso <- predict(fit.lasso, D.test$target$x)

mean((y.pred.glmtrans - D.test$target$y)^2)
mean((y.pred.lasso - D.test$target$y)^2)


# fit a logistic regression model
D.training <- models("binomial", type = "all", n.target = 100, K = 2, p = 500)
D.test <- models("binomial", type = "target", n.target = 100, p = 500)
fit.binomial <- glmtrans(D.training$target, D.training$source, family = "binomial")
y.pred.glmtrans <- predict(fit.binomial, D.test$target$x, type = "class")

# compare the test error with classical Lasso fitted on target data
library(glmnet)
fit.lasso <- cv.glmnet(x = D.training$target$x, y = D.training$target$y, family = "binomial")
y.pred.lasso <- as.numeric(predict(fit.lasso, D.test$target$x, type = "class"))

mean(y.pred.glmtrans != D.test$target$y)
mean(y.pred.lasso != D.test$target$y)


# fit a Poisson regression model
D.training <- models("poisson", type = "all", n.target = 100, K = 2, p = 500)
D.test <- models("poisson", type = "target", n.target = 100, p = 500)
fit.poisson <- glmtrans(D.training$target, D.training$source, family = "poisson")
y.pred.glmtrans <- predict(fit.poisson, D.test$target$x, type = "response")

# compare the test MSE with classical Lasso fitted on target data
fit.lasso <- cv.glmnet(x = D.training$target$x, y = D.training$target$y, family = "poisson")
y.pred.lasso <- as.numeric(predict(fit.lasso, D.test$target$x, type = "response"))

mean((y.pred.glmtrans - D.test$target$y)^2)
mean((y.pred.lasso - D.test$target$y)^2)


[Package glmtrans version 2.0.0 Index]