fit.subgroup              package:personalized              R Documentation

Fitting subgroup identification models

Description

Fits the subgroup identification model class of Chen et al. (2017)

Usage

fit.subgroup(
  x,
  y,
  trt,
  propensity.func = NULL,
  loss = c("sq_loss_lasso", "logistic_loss_lasso", "poisson_loss_lasso",
    "cox_loss_lasso", "owl_logistic_loss_lasso", "owl_logistic_flip_loss_lasso",
    "owl_hinge_loss", "owl_hinge_flip_loss", "sq_loss_lasso_gam",
    "poisson_loss_lasso_gam", "logistic_loss_lasso_gam", "sq_loss_gam",
    "poisson_loss_gam", "logistic_loss_gam", "owl_logistic_loss_gam",
    "owl_logistic_flip_loss_gam", "owl_logistic_loss_lasso_gam",
    "owl_logistic_flip_loss_lasso_gam", "sq_loss_xgboost", "custom"),
  method = c("weighting", "a_learning"),
  match.id = NULL,
  augment.func = NULL,
  fit.custom.loss = NULL,
  cutpoint = 0,
  larger.outcome.better = TRUE,
  reference.trt = NULL,
  retcall = TRUE,
  ...
)

Arguments

x

The design matrix (not including intercept term)

y

The response vector

trt

treatment vector with each element equal to 0 or 1, with 1 indicating that the active treatment was received.

propensity.func

function that inputs the design matrix x and the treatment vector trt and outputs the propensity score, i.e. Pr(trt = 1 | X = x). The function should take two arguments: 1) x and 2) trt. See the example below. For a randomized controlled trial this can simply be a function that returns a constant equal to the proportion of patients assigned to the treatment group, e.g. propensity.func = function(x, trt) 0.5.
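
For observational data, a model-based propensity function can be used instead. A minimal sketch, assuming a simple logistic regression is adequate for the treatment assignment mechanism (the model form here is purely illustrative):

 prop.func.glm <- function(x, trt)
 {
     # logistic regression of treatment on covariates
     propens.model <- glm(trt ~ x, family = binomial())
     # estimated Pr(trt = 1 | X = x)
     unname(fitted(propens.model))
 }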

loss

choice of both the M function from Chen et al. (2017) and, potentially, the penalty used for variable selection. All loss options starting with sq_loss use M(y, v) = (v - y) ^ 2, all options starting with owl_logistic use the outcome-weighted logistic loss M(y, v) = y * log(1 + exp{-v}), all options starting with logistic_loss use the binomial negative log-likelihood (see the Binary Outcomes list below), and all options starting with cox_loss use the negative partial likelihood loss for the Cox PH model. All options ending with lasso have a lasso penalty added to the loss for variable selection. sq_loss_lasso_gam and logistic_loss_lasso_gam first use the lasso to select variables and then fit a generalized additive model with nonparametric additive terms for each selected variable. sq_loss_gam involves a squared error loss with a generalized additive model and no variable selection. sq_loss_xgboost involves a squared error loss with a gradient-boosted decision tree model fit using xgboost for the benefit score; this allows for flexible estimation using machine learning and can be useful when the underlying treatment-covariate interaction is complex. It requires specifying params, nrounds, nfold, and, optionally, early_stopping_rounds; see xgb.train for details. A brief sketch of the xgboost usage follows the list below.

  • Continuous Outcomes

    • "sq_loss_lasso" - M(y, v) = (v - y) ^ 2 with linear model and lasso penalty

    • "owl_logistic_loss_lasso" - M(y, v) = ylog(1 + exp{-v}) (method of Regularized Outcome Weighted Subgroup Identification)

    • "owl_logistic_flip_loss_lasso" - M(y, v) = |y|log(1 + exp{-sign(y)v})

    • "owl_hinge_loss" - M(y, v) = ymax(0, 1 - v) (method of Estimating individualized treatment rules using outcome weighted learning)

    • "owl_hinge_flip_loss" - M(y, v) = |y|max(0, 1 - sign(y)v)

    • "sq_loss_lasso_gam" - M(y, v) = (v - y) ^ 2 with variables selected by lasso penalty and generalized additive model fit on the selected variables

    • "sq_loss_gam" - M(y, v) = (v - y) ^ 2 with generalized additive model fit on all variables

    • "owl_logistic_loss_gam" - M(y, v) = ylog(1 + exp{-v}) with generalized additive model fit on all variables

    • "owl_logistic_flip_loss_gam" - M(y, v) = |y|log(1 + exp{-sign(y)v}) with generalized additive model fit on all variables

    • "owl_logistic_loss_lasso_gam" - M(y, v) = ylog(1 + exp{-v}) with variables selected by lasso penalty and generalized additive model fit on the selected variables

    • "owl_logistic_flip_loss_lasso_gam" - M(y, v) = |y|log(1 + exp{-sign(y)v}) with variables selected by lasso penalty and generalized additive model fit on the selected variables

    • "sq_loss_xgboost" - M(y, v) = (v - y) ^ 2 with gradient-boosted decision trees model

  • Binary Outcomes

    • All losses for continuous outcomes can be used plus the following:

    • "logistic_loss_lasso" - M(y, v) = -[yv - log(1 + exp{-v})] with with linear model and lasso penalty

    • "logistic_loss_lasso_gam" - M(y, v) = -[yv - log(1 + exp{-v})] with variables selected by lasso penalty and generalized additive model fit on the selected variables

    • "logistic_loss_gam" - M(y, v) = -[yv - log(1 + exp{-v})] with generalized additive model fit on all variables

  • Count Outcomes

    • All losses for continuous outcomes can be used plus the following:

    • "poisson_loss_lasso" - M(y, v) = -[yv - exp(v)] with with linear model and lasso penalty

    • "poisson_loss_lasso_gam" - M(y, v) = -[yv - exp(v)] with variables selected by lasso penalty and generalized additive model fit on the selected variables

    • "poisson_loss_gam" - M(y, v) = -[yv - exp(v)] with generalized additive model fit on all variables

  • Time-to-Event Outcomes

    • "cox_loss_lasso" - M corresponds to the negative partial likelihood of the cox model with linear model and additionally a lasso penalty

method

subgroup ID model type. Either the weighting or A-learning method of Chen et al. (2017)

match.id

a (character, factor, or integer) vector with length equal to the number of observations in x, whose integers or factor levels indicate which patients belong to which matched groups. Defaults to NULL, which assumes the samples are not from a matched cohort. Matched case-control groups can be created using any method (propensity score matching, optimal matching, etc.). If each case is matched with one or more controls, this indicates which case-control pairs or groups go together. If match.id is supplied, then it is unnecessary to specify a function via the propensity.func argument. A quick usage example: if the first patient is a case and the second and third are controls matched to it, and the fourth patient is a case and the fifth through seventh patients are matched with it, then the user should specify match.id = c(1,1,1,2,2,2,2) or match.id = c(rep("Grp1", 3), rep("Grp2", 4)).
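
A hedged sketch of a matched-cohort fit, where x.matched, y.matched, and trt.matched are hypothetical objects from a matched design; note that no propensity.func is needed:

 # matched groups as in the quick usage example above
 ids <- c(rep("Grp1", 3), rep("Grp2", 4))
 subgrp.matched <- fit.subgroup(x = x.matched, y = y.matched,
                                trt = trt.matched,
                                match.id = ids,
                                loss = "sq_loss_lasso")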

augment.func

function which inputs the response y, the covariates x, and trt, and outputs predicted values (on the link scale) for the response using a model constructed with x. augment.func() can also simply be a function of x and y. This function is used for efficiency augmentation. When the form of the augmentation function is correct, it can provide efficient estimation of the subgroups. Some examples of possible augmentation functions are:

Example 1: augment.func <- function(x, y) {lmod <- lm(y ~ x); return(fitted(lmod))}

Example 2:

augment.func <- function(x, y, trt) {
    data <- data.frame(x, y, trt)
    lmod <- lm(y ~ x * trt)
    ## get predictions when trt = 1
    data$trt <- 1
    preds_1  <- predict(lmod, data)

    ## get predictions when trt = -1
    data$trt <- -1
    preds_n1 <- predict(lmod, data)

    ## return predictions averaged over trt
    return(0.5 * (preds_1 + preds_n1))
}

For binary and time-to-event outcomes, make sure that predictions are returned on the scale of the linear predictor (the link scale)

Example 3:

augment.func <- function(x, y) {
        bmod <- glm(y ~ x, family = binomial())
        return(predict(bmod, type = "link"))
    }
 
fit.custom.loss

A function which minimizes a user-specified custom loss function M(y, v) to be used in model fitting. If provided, fit.custom.loss should take the modified design matrix (which includes an intercept term) and the responses as arguments and optimize a custom weighted loss function.

The loss function M(y, v) to be minimized MUST meet the following two criteria:

  1. D_M(y, v) = \partial M(y, v)/\partial v must be increasing in v for each fixed y. D_M(y, v) is the partial derivative of the loss function M(y, v) with respect to v

  2. D_M(y, 0) is monotone in y

An example of a valid loss function is M(y, v) = (y - v)^2. In this case D_M(y, v) = -2(y - v), which is increasing in v for each fixed y, and D_M(y, 0) = -2y is monotone (decreasing) in y. See Chen et al. (2017) for more details on the restrictions on the loss function M(y, v).

The provided function MUST return a list with the following elements:

  • predict a function that inputs a design matrix and a 'type' argument for the type of predictions, and outputs a vector of predictions on the scale of the linear predictor. Note that the matrix provided to fit.custom.loss has an extra column prepended to x corresponding to the treatment main effect, so the prediction function should account for this, e.g. predict(model, cbind(1, x))

  • model a fitted model object returned by the underlying fitting function

  • coefficients if the underlying fitting function yields a vector of coefficient estimates, they should be provided here

The provided function MUST be a function with the following arguments:

  1. x design matrix

  2. y vector of responses

  3. weights vector of observation weights. The underlying loss function MUST weight samples according to this vector. See the examples below

  4. ... additional arguments passed via '...'. This can be used so that users can specify more arguments to the underlying fitting function if so desired.

The provided function can also optionally take the following arguments:

  • match.id vector of case/control cluster IDs. This is useful if cross-validation is used in the underlying fitting function, in which case it is advisable to sample whole clusters randomly instead of individual observations.

  • offset if efficiency augmentation is used, the predictions from the outcome model from augment.func will be provided via the offset argument, which can be used as an offset in the underlying fitting function as a means of incorporating the efficiency augmentation model's predictions

  • trt vector of treatment statuses

  • family family of outcome

  • n.trts number of treatment levels. Can be useful if there are more than 2 treatment levels

Example 1: Here we minimize M(y, v) = (y - v)^2

 fit.custom.loss <- function(x, y, weights, ...) {
     df <- data.frame(y = y, x)

     # minimize squared error loss with NO lasso penalty
     lmf <- lm(y ~ x - 1, weights = weights,
               data = df, ...)

     # save coefficients
     cfs = coef(lmf)

     # create prediction function. Notice
     # how a column of 1's is appended
     # to ensure treatment main effects are included
     # in predictions
     prd = function(x, type = "response")
     {
         dfte <- cbind(1, x)
         colnames(dfte) <- names(cfs)
         predict(lmf, data.frame(dfte))
     }
     # return list of required components
     list(predict = prd, model = lmf, coefficients = cfs)
 }
 

Example 2: M(y, v) = y * exp(-v)

 fit.expo.loss <- function(x, y, weights, ...)
 {
     ## define loss function to be minimized
     expo.loss <- function(beta, x, y, weights) {
         sum(weights * y * exp(-drop(x %*% beta)))
     }

     # use optim() to minimize loss function
     opt <- optim(rep(0, NCOL(x)), fn = expo.loss, x = x, y = y, weights = weights)

     coefs <- opt$par

     pred <- function(x, type = "response") {
         tcrossprod(cbind(1, x), t(coefs))
     }

     # return list of required components
     list(predict = pred, model = opt, coefficients = coefs)
 }
 
cutpoint

numeric value above which (or below which, if larger.outcome.better = FALSE) a patient's benefit score must fall for the treatment to be recommended. Can also be set to cutpoint = "median", which uses the median of the benefit scores as the cutpoint, or to a specific quantile via "quantx", where "x" is a number between 0 and 100 representing the quantile level; e.g. cutpoint = "quant75" uses the 75th percentile of the benefit scores as the cutpoint.
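
A minimal sketch, assuming x, y, trt01, and prop.func as defined in the Examples section, which recommends the treatment only to patients in the top quartile of benefit scores:

 subgrp.q75 <- fit.subgroup(x = x, y = y, trt = trt01,
                            propensity.func = prop.func,
                            loss = "sq_loss_lasso",
                            cutpoint = "quant75")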

larger.outcome.better

boolean value of whether a larger outcome is better/preferable. Set to TRUE if a larger outcome is better/preferable and set to FALSE if a smaller outcome is better/preferable. Defaults to TRUE.

reference.trt

which treatment should be treated as the reference treatment. Defaults to the first level if trt is a factor, or otherwise to the alphabetically or numerically first treatment level. Not used for multiple-treatment fitting with OWL-type losses.

retcall

boolean value. If TRUE, the passed arguments will be saved. Do not set to FALSE if the validate.subgroup() function will later be used with your fitted subgroup model. Only set to FALSE if memory is limited, as setting it to TRUE saves the design matrix in the fitted object.

...

options to be passed to the underlying fitting function. For all loss options with 'lasso', these will be passed to cv.glmnet. For all loss options with 'gam', these will be passed to gam from the mgcv package. Note that for all loss options that use gam() from the mgcv package, the user cannot supply the gam argument method because it is also an argument of fit.subgroup; instead, to change the gam method argument, supply method.gam, e.g. method.gam = "REML".

For all loss options with 'hinge', these will be passed to both weighted.ksvm and ipop from the kernlab package.

Value

An object of class "subgroup_fitted".

predict

A function that returns predictions of the covariate-conditional treatment effects

model

An object returned by the underlying fitting function used. For example, if the lasso is used to fit the underlying subgroup identification model, this will be an object returned by cv.glmnet.

coefficients

If the underlying subgroup identification model is parametric, coefficients will contain the estimated coefficients of the model.

call

The call that produced the returned object. If retcall = TRUE, this will contain all objects supplied to fit.subgroup()

family

The family corresponding to the outcome provided

loss

The loss function used

method

The method used (either weighting or A-learning)

propensity.func

The propensity score function used

larger.outcome.better

Whether larger outcomes are preferred for this model

cutpoint

Benefit score cutoff value used for determining subgroups

var.names

The names of all variables used

n.trts

The number of treatment levels

comparison.trts

All treatment levels other than the reference level

reference.trt

The reference level for the treatment. This should usually be the control group/level

trts

All treatment levels

trt.received

The vector of treatment assignments

pi.x

A vector of propensity scores

y

A vector of outcomes

benefit.scores

A vector of conditional treatment effects, i.e. benefit scores

recommended.trts

A vector of treatment recommendations (i.e. for each patient, which treatment results in the best expected potential outcomes)

subgroup.trt.effects

(Biased) estimates of the conditional treatment effects and conditional outcomes. These are essentially just empirical averages within different combinations of treatment assignments and treatment recommendations

individual.trt.effects

estimates of the individual treatment effects as returned by treat.effects
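
These components can be accessed directly from the fitted object. A brief sketch, assuming subgrp.model is a fit from the Examples section:

 head(subgrp.model$benefit.scores)
 table(subgrp.model$recommended.trts, subgrp.model$trt.received)
 subgrp.model$subgroup.trt.effects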

References

Huling, J.D. and Yu, M. (2021), Subgroup Identification Using the personalized Package. Journal of Statistical Software, 98(5), 1-60. doi:10.18637/jss.v098.i05

Chen, S., Tian, L., Cai, T. and Yu, M. (2017), A general statistical framework for subgroup identification and comparative treatment scoring. Biometrics. doi:10.1111/biom.12676

Xu, Y., Yu, M., Zhao, Y. Q., Li, Q., Wang, S. and Shao, J. (2015), Regularized outcome weighted subgroup identification for differential treatment effects. Biometrics, 71(3), 645-653. doi:10.1111/biom.12322

Zhao, Y., Zeng, D., Rush, A. J. and Kosorok, M. R. (2012), Estimating individualized treatment rules using outcome weighted learning. Journal of the American Statistical Association, 107(499), 1106-1118. doi:10.1080/01621459.2012.695674

See Also

validate.subgroup for a function which creates validation results for subgroup identification models, predict.subgroup_fitted for a prediction function for fitted models from fit.subgroup, plot.subgroup_fitted for a function which plots results from fitted models, and print.subgroup_fitted for printing options for fitted models from fit.subgroup.
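
A hedged sketch of downstream use, assuming subgrp.model is a fit from the Examples section (the argument values here are illustrative; consult the validate.subgroup help page for its full interface):

 # repeated training/test splitting to assess subgroup effects
 validation <- validate.subgroup(subgrp.model, B = 100,
                                 method = "training_test_replication")
 validation
 plot(subgrp.model)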

Examples

library(personalized)

set.seed(123)
n.obs  <- 500
n.vars <- 15
x <- matrix(rnorm(n.obs * n.vars, sd = 3), n.obs, n.vars)


# simulate non-randomized treatment
xbetat   <- 0.5 + 0.5 * x[,7] - 0.5 * x[,9]
trt.prob <- exp(xbetat) / (1 + exp(xbetat))
trt01    <- rbinom(n.obs, 1, prob = trt.prob)

trt      <- 2 * trt01 - 1

# simulate response
# delta below drives treatment effect heterogeneity
delta <- 2 * (0.5 + x[,2] - x[,3] - x[,11] + x[,1] * x[,12] )
xbeta <- x[,1] + x[,11] - 2 * x[,12]^2 + x[,13] + 0.5 * x[,15] ^ 2
xbeta <- xbeta + delta * trt

# continuous outcomes
y <- drop(xbeta) + rnorm(n.obs, sd = 2)

# binary outcomes
y.binary <- 1 * (xbeta + rnorm(n.obs, sd = 2) > 0 )

# count outcomes
y.count <- round(abs(xbeta + rnorm(n.obs, sd = 2)))

# time-to-event outcomes
surv.time <- exp(-20 - xbeta + rnorm(n.obs, sd = 1))
cens.time <- exp(rnorm(n.obs, sd = 3))
y.time.to.event  <- pmin(surv.time, cens.time)
status           <- 1 * (surv.time <= cens.time)

# create function for fitting propensity score model
prop.func <- function(x, trt)
{
    # fit propensity score model
    propens.model <- cv.glmnet(y = trt,
                               x = x, family = "binomial")
    pi.x <- predict(propens.model, s = "lambda.min",
                    newx = x, type = "response")[,1]
    pi.x
}


####################  Continuous outcomes ################################


subgrp.model <- fit.subgroup(x = x, y = y,
                           trt = trt01,
                           propensity.func = prop.func,
                           loss   = "sq_loss_lasso",
                           # option for cv.glmnet,
                           # better to use 'nfolds=10'
                           nfolds = 3)

summary(subgrp.model)

# estimates of the individual-specific
# treatment effect estimates:
subgrp.model$individual.trt.effects

# fit lasso + gam model with REML option for gam


subgrp.modelg <- fit.subgroup(x = x, y = y,
                            trt = trt01,
                            propensity.func = prop.func,
                            loss   = "sq_loss_lasso_gam",
                            method.gam = "REML",     # option for gam
                            nfolds = 5)              # option for cv.glmnet

subgrp.modelg


####################  Using an augmentation function #####################
## augmentation functions involve modeling the conditional mean E[Y|T, X]
## and returning predictions that are averaged over the treatment values,
## i.e. 0.5 * (hat{E}[Y|T=1, X] + hat{E}[Y|T=-1, X])
##########################################################################

augment.func <- function(x, y, trt) {
    data <- data.frame(x, y, trt)
    xm <- model.matrix(y~trt*x-1, data = data)

    lmod <- cv.glmnet(y = y, x = xm)
    ## get predictions when trt = 1
    data$trt <- 1
    xm <- model.matrix(y~trt*x-1, data = data)
    preds_1  <- predict(lmod, xm, s = "lambda.min")

    ## get predictions when trt = -1
    data$trt <- -1
    xm <- model.matrix(y~trt*x-1, data = data)
    preds_n1  <- predict(lmod, xm, s = "lambda.min")

    ## return predictions averaged over trt
    return(0.5 * (preds_1 + preds_n1))
}


subgrp.model.aug <- fit.subgroup(x = x, y = y,
                           trt = trt01,
                           propensity.func = prop.func,
                           augment.func    = augment.func,
                           loss   = "sq_loss_lasso",
                           # option for cv.glmnet,
                           # better to use 'nfolds=10'
                           nfolds = 3)              # option for cv.glmnet

summary(subgrp.model.aug)


####################  Binary outcomes ####################################

# use logistic loss for binary outcomes
subgrp.model.bin <- fit.subgroup(x = x, y = y.binary,
                           trt = trt01,
                           propensity.func = prop.func,
                           loss   = "logistic_loss_lasso",
                           type.measure = "auc",    # option for cv.glmnet
                           nfolds = 3)              # option for cv.glmnet

subgrp.model.bin


####################  Count outcomes #####################################

# use poisson loss for count/poisson outcomes
subgrp.model.poisson <- fit.subgroup(x = x, y = y.count,
                           trt = trt01,
                           propensity.func = prop.func,
                           loss   = "poisson_loss_lasso",
                           type.measure = "mse",    # option for cv.glmnet
                           nfolds = 3)              # option for cv.glmnet

subgrp.model.poisson


####################  Time-to-event outcomes #############################

library(survival)

subgrp.model.cox <- fit.subgroup(x = x, y = Surv(y.time.to.event, status),
                           trt = trt01,
                           propensity.func = prop.func,
                           loss   = "cox_loss_lasso",
                           nfolds = 3)              # option for cv.glmnet

subgrp.model.cox



####################  Using custom loss functions ########################

## Use custom loss function for binary outcomes

fit.custom.loss.bin <- function(x, y, weights, offset, ...) {
    df <- data.frame(y = y, x)

    # minimize logistic loss with NO lasso penalty
    # with allowance for efficiency augmentation
    glmf <- glm(y ~ x - 1, weights = weights,
                offset = offset, # offset term allows for efficiency augmentation
                family = binomial(), ...)

    # save coefficients
    cfs = coef(glmf)

    # create prediction function.
    prd = function(x, type = "response") {
         dfte <- cbind(1, x)
         colnames(dfte) <- names(cfs)
         ## predictions must be returned on the scale
         ## of the linear predictor
         predict(glmf, data.frame(dfte), type = "link")
    }
    # return list of required components
    list(predict = prd, model = glmf, coefficients = cfs)
}


subgrp.model.bin.cust <- fit.subgroup(x = x, y = y.binary,
                                 trt = trt01,
                                 propensity.func = prop.func,
                                 fit.custom.loss = fit.custom.loss.bin)

subgrp.model.bin.cust



## try exponential loss for
## positive outcomes

fit.expo.loss <- function(x, y, weights, ...)
{
    expo.loss <- function(beta, x, y, weights) {
        sum(weights * y * exp(-drop(x %*% beta)))
    }

    # use optim() to minimize loss function
    opt <- optim(rep(0, NCOL(x)), fn = expo.loss, x = x, y = y, weights = weights)

    coefs <- opt$par

    pred <- function(x, type = "response") {
        tcrossprod(cbind(1, x), t(coefs))
    }

    # return list of required components
    list(predict = pred, model = opt, coefficients = coefs)
}


# use exponential loss for positive outcomes
subgrp.model.expo <- fit.subgroup(x = x, y = y.count,
                                  trt = trt01,
                                  propensity.func = prop.func,
                                  fit.custom.loss = fit.expo.loss)

subgrp.model.expo



