recur.bart {BART}    R Documentation

BART for recurrent events

Description

Here we have implemented a simple and direct approach to utilize BART in survival analysis that is very flexible, and is akin to discrete-time survival analysis. Following the capabilities of BART, we allow for maximum flexibility in modeling the dependence of survival times on covariates. In particular, we do not impose proportional hazards.

To elaborate, consider data in the usual form: (t_i, \delta_i, x_i) where t_i is the event time, \delta_i is an indicator distinguishing events (\delta=1) from right-censoring (\delta=0), x_i is a vector of covariates, and i=1, ..., N indexes subjects.

We denote the K distinct event/censoring times by 0<t_{(1)}<...<t_{(K)}<\infty thus taking t_{(j)} to be the j^{th} order statistic among distinct observation times and, for convenience, t_{(0)}=0. Now consider event indicators y_{ij} for each subject i at each distinct time t_{(j)} up to and including the subject's observation time t_i=t_{(n_i)} with n_i=\sum_j I[t_{(j)}\leq t_i]. This means y_{ij}=0 if j<n_i and y_{in_i}=\delta_i.

We then denote by p_{ij} the probability of an event at time t_{(j)} conditional on no previous event. We now write the model for y_{ij} as a nonparametric probit regression of y_{ij} on the time t_{(j)} and the covariates x_i, and then utilize BART for binary responses. Specifically, y_{ij} = \delta_i I[t_i=t_{(j)}], j=1, ..., n_i; we have p_{ij} = F(\mu_{ij}), \mu_{ij} = \mu_0 + f(t_{(j)}, x_i), where F denotes the standard normal cdf (probit link). As in the binary response case, f is the sum of many tree models.
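
The data construction is provided by recur.pre.bart (see below); the following toy sketch, which is illustration only and not the package's internal code, builds the y_{ij} and the time grid directly from the definitions above.

## Illustration only (not the package's recur.pre.bart): build y_ij and the
## time grid from the definitions above for a toy data set.
times <- c(3, 5, 5, 9)   ## t_i: event or censoring times
delta <- c(1, 0, 1, 1)   ## delta_i: 1=event, 0=right-censored

grid <- sort(unique(times))              ## distinct times t_(1) < ... < t_(K)
y <- NULL
tx <- NULL
for(i in seq_along(times)) {
    n.i <- sum(grid <= times[i])         ## n_i: grid points up to and including t_i
    y.i <- rep(0, n.i)                   ## y_ij = 0 for j < n_i
    y.i[n.i] <- delta[i]                 ## y_{i n_i} = delta_i
    y <- c(y, y.i)
    tx <- rbind(tx, cbind(t=grid[1:n.i], subject=i))
}
cbind(tx, y)                             ## person-period data for probit BART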

Usage


recur.bart(x.train=matrix(0,0,0),
           y.train=NULL, times=NULL, delta=NULL,
           x.test=matrix(0,0,0), x.test.nogrid=FALSE,
           sparse=FALSE, theta=0, omega=1,
           a=0.5, b=1, augment=FALSE, rho=NULL,
           xinfo=matrix(0,0,0), usequants=FALSE,
           rm.const=TRUE, type='pbart',
           ntype=as.integer(
               factor(type, levels=c('wbart', 'pbart', 'lbart'))),
           k=2, power=2, base=0.95,
           offset=NULL, tau.num=c(NA, 3, 6)[ntype],
           ntree=50, numcut=100L, ndpost=1000, nskip=250,
           keepevery=10,
           printevery=100L,
           keeptrainfits=TRUE,
           seed=99,    ## mc.recur.bart only
           mc.cores=2, ## mc.recur.bart only
           nice=19L    ## mc.recur.bart only
          )

mc.recur.bart(x.train=matrix(0,0,0),
              y.train=NULL, times=NULL, delta=NULL,
              x.test=matrix(0,0,0), x.test.nogrid=FALSE,
              sparse=FALSE, theta=0, omega=1,
              a=0.5, b=1, augment=FALSE, rho=NULL,
              xinfo=matrix(0,0,0), usequants=FALSE,
              rm.const=TRUE, type='pbart',
              ntype=as.integer(
                  factor(type, levels=c('wbart', 'pbart', 'lbart'))),
              k=2, power=2, base=0.95,
              offset=NULL, tau.num=c(NA, 3, 6)[ntype],
              ntree=50, numcut=100L, ndpost=1000, nskip=250,
              keepevery=10,
              printevery=100L,
              keeptrainfits=TRUE,
              seed=99,    ## mc.recur.bart only
              mc.cores=2, ## mc.recur.bart only
              nice=19L    ## mc.recur.bart only
            )

Arguments

x.train

Explanatory variables for training (in sample) data.
Must be a matrix with (as usual) rows corresponding to observations and columns to variables.
recur.bart will generate draws of f(t, x) for each x which is a row of x.train (note that the definition of x.train is dependent on whether y.train has been specified; see below).

y.train

Binary response dependent variable for training (in sample) data.
If y.train is NULL, then y.train (and x.train and x.test, if specified) are generated by a call to recur.pre.bart (which requires that times and delta be provided; see below); otherwise, y.train (and x.train and x.test, if specified) are used as given, assuming that the data construction has already been performed.

times

The time of event or right-censoring.
If y.train is NULL, then times (and delta) must be provided.

delta

The event indicator: 1 is an event while 0 is censored.
If y.train is NULL, then delta (and times) must be provided.

x.test

Explanatory variables for test (out of sample) data.
Must be a matrix and have the same structure as x.train.
recur.bart will generate draws of f(t, x) for each x which is a row of x.test.

x.test.nogrid

Occasionally, you do not need the entire time grid for x.test. If so, then for performance reasons, you can set this argument to TRUE.

sparse

Whether to perform variable selection based on a sparse Dirichlet prior rather than simply uniform; see Linero 2016.

theta

Set theta parameter; zero means random.

omega

Set omega parameter; zero means random.

a

Sparse parameter for the Beta(a, b) prior: 0.5<=a<=1, where lower values induce more sparsity.

b

Sparse parameter for Beta(a, b) prior; typically, b=1.

rho

Sparse parameter: typically rho=p where p is the number of covariates under consideration.

augment

Whether data augmentation is to be performed in sparse variable selection.

xinfo

You can provide the cutpoints to BART or let BART choose them for you. To provide them, use the xinfo argument to specify a list (matrix) where the items (rows) are the covariates and the contents of the items (columns) are the cutpoints.
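For example, a minimal sketch of user-supplied cutpoints for two covariates, following the layout described above (rows are covariates, columns are cutpoints); the values are arbitrary illustration only.

## Sketch only: cutpoints for two covariates (rows=variables, columns=cutpoints)
xinfo <- rbind(seq(0, 1, length.out=10),     ## cutpoints for covariate 1
               seq(-3, 3, length.out=10))    ## cutpoints for covariate 2
xinfo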

usequants

If usequants=FALSE, then the cutpoints in xinfo are generated uniformly; otherwise, if TRUE, uniform quantiles are used for the cutpoints.

rm.const

Whether or not to remove constant variables.

type

Whether to employ Albert-Chib, 'pbart', or Holmes-Held, 'lbart'.

ntype

The integer equivalent of type where 'wbart' is 1, 'pbart' is 2 and 'lbart' is 3.

k

k is the number of prior standard deviations f(t, x) is away from +/-3. The bigger k is, the more conservative the fitting will be.

power

Power parameter for tree prior.

base

Base parameter for tree prior.

offset

With binary BART, the centering is P(Y=1 | x) = F(f(x) + offset) where offset defaults to F^{-1}(mean(y.train)). You can use the offset parameter to override this default.
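
For instance, with the probit link the default centering can be reproduced as in the sketch below (illustration only; with type='lbart', the logistic quantile function qlogis would play the role of F^{-1}).

## Sketch only: reproducing the default offset for the probit link
y.train <- c(0, 0, 1, 0, 1)             ## toy constructed binary responses
offset.default <- qnorm(mean(y.train))  ## F^{-1}(mean(y.train)) with F = pnorm
offset.default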

tau.num

The numerator in the tau definition, i.e., tau=tau.num/(k*sqrt(ntree)).

ntree

The number of trees in the sum.

numcut

The number of possible values of c (see usequants). If a single number is given, this is used for all variables. Otherwise, a vector of length ncol(x.train) is required, where the i^{th} element gives the number of c values used for the i^{th} variable in x.train. If usequants is FALSE, numcut equally spaced cutoffs are used covering the range of values in the corresponding column of x.train. If usequants is TRUE, then min(numcut, the number of unique values in the corresponding column of x.train minus 1) c values are used.
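
The two rules can be sketched as follows for a single column of x.train (an approximation of the behavior described above, not the internal code).

## Approximate sketch of the two cutpoint rules (illustration only)
x <- rnorm(100)      ## a single column of x.train
numcut <- 10

## usequants=FALSE: numcut equally spaced cutoffs covering the range of x
cuts.spaced <- seq(min(x), max(x), length.out=numcut+2)[-c(1, numcut+2)]

## usequants=TRUE: min(numcut, number of unique values - 1) quantile-based cutoffs
nc <- min(numcut, length(unique(x))-1)
cuts.quants <- quantile(x, probs=(1:nc)/(nc+1))

length(cuts.spaced)
length(cuts.quants)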

ndpost

The number of posterior draws returned.

nskip

Number of MCMC iterations to be treated as burn in.

keepevery

Every keepevery draw is kept to be returned to the user.

printevery

As the MCMC runs, a message is printed every printevery draws.

keeptrainfits

Whether to keep yhat.train or not.

seed

mc.recur.bart only: seed required for reproducible MCMC.

mc.cores

mc.recur.bart only: number of cores to employ in parallel.

nice

mc.recur.bart only: set the job niceness. The default niceness is 19: niceness goes from 0 (highest) to 19 (lowest).

Value

recur.bart returns an object of type recurbart which is essentially a list. Besides the items listed below, the list has a binaryOffset component giving the value used, a times component giving the unique times, a K component giving the number of unique times, and the constructed matrices tx.train and tx.test, if any.

yhat.train

A matrix with ndpost rows and nrow(x.train) columns. Each row corresponds to a draw f^* from the posterior of f and each column corresponds to a row of x.train. The (i,j) value is f^*(t, x) for the i^{th} kept draw of f and the j^{th} row of x.train.
Burn-in is dropped.

haz.train

The hazard function, h(t|x), where x's are the rows of the training data.

cum.train

The cumulative hazard function, H(t|x), where x's are the rows of the training data.

yhat.test

Same as yhat.train but now the x's are the rows of the test data.

haz.test

The hazard function, h(t|x), where x's are the rows of the test data.

cum.test

The cumulative hazard function, H(t|x), where x's are the rows of the test data.

varcount

A matrix with ndpost rows and ncol(x.train) columns. Each row is for a draw. For each variable (corresponding to the columns), the total count of the number of times that variable is used in a tree decision rule (over all trees) is given.

Note that yhat.train and yhat.test are f(t, x) + binaryOffset. If you want draws of the probability P(Y=1 | t, x) you need to apply the normal cdf (pnorm) to these values.
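
For example, a toy illustration of this conversion with simulated draws standing in for the returned yhat.test:

## Toy illustration: applying the probit cdf to draws of f(t, x) + binaryOffset
## yields draws of P(Y=1 | t, x) on the time grid.
yhat <- matrix(rnorm(6), nrow=2)   ## stand-in for yhat.test (2 draws, 3 grid points)
prob <- pnorm(yhat)                ## event probability draws
apply(prob, 2, mean)               ## posterior-mean probability at each grid point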

See Also

recur.pre.bart, predict.recurbart, recur.pwbart, mc.recur.pwbart

Examples


## load 20 percent random sample
data(xdm20.train)
data(xdm20.test)
data(ydm20.train)

## test BART with token run to ensure installation works
## with current technology even a token run will violate CRAN policy
## set.seed(99)
## post <- recur.bart(x.train=xdm20.train, y.train=ydm20.train,
##                    nskip=1, ndpost=1, keepevery=1)

## Not run: 

## set.seed(99)
## post <- recur.bart(x.train=xdm20.train, y.train=ydm20.train,
##                    keeptrainfits=TRUE)

## larger data sets can take some time so, if parallel processing
## is available, submit this statement instead
post <- mc.recur.bart(x.train=xdm20.train, y.train=ydm20.train,
                      keeptrainfits=TRUE, mc.cores=8, seed=99)

require(rpart)
require(rpart.plot)

post$yhat.train.mean <- apply(post$yhat.train, 2, mean)
dss <- rpart(post$yhat.train.mean~xdm20.train)

rpart.plot(dss)
## for the 20 percent sample, notice that the top splits
## involve cci_pvd and n
## for the full data set, notice that all splits
## involve ca, cci_pud, cci_pvd, ins270 and n
## (except one at the bottom involving a small group)

## compare patients treated with insulin (ins270=1) vs
## not treated with insulin (ins270=0)
N <- 50 ## 50 training patients and 50 validation patients
K <- post$K ## 798 unique time points
NK <- 50*K

## only testing set, i.e., remove training set
xdm20.test. <- xdm20.test[NK+1:NK, post$rm.const]
xdm20.test. <- rbind(xdm20.test., xdm20.test.)
xdm20.test.[ , 'ins270'] <- rep(0:1, each=NK)

## multiple threads will be utilized if available
pred <- predict(post, xdm20.test., mc.cores=8)

## create Friedman's partial dependence function for the
## relative intensity for ins270 by time
M <- nrow(pred$haz.test) ## number of MCMC samples
RI <- matrix(0, M, K)
for(j in 1:K) {
    h <- seq(j, NK, by=K)
    RI[ , j] <- apply(pred$haz.test[ , h+NK]/
                      pred$haz.test[ , h], 1, mean)
}

RI.lo <- apply(RI, 2, quantile, probs=0.025)
RI.mu <- apply(RI, 2, mean)
RI.hi <- apply(RI, 2, quantile, probs=0.975)

plot(post$times, RI.hi, type='l', lty=2, log='y',
     ylim=c(min(RI.lo, 1/RI.hi), max(1/RI.lo, RI.hi)),
     xlab='t', ylab='RI(t, x)',
     sub='insulin(ins270=1) vs. no insulin(ins270=0)',
     main='Relative intensity of hospital admissions for diabetics')
lines(post$times, RI.mu)
lines(post$times, RI.lo, lty=2)
lines(post$times, rep(1, K), col='darkgray')

## RI for insulin therapy seems fairly constant with time
mean(RI.mu)


## End(Not run)
