mice.impute.rfcont {CALIBERrfimpute}        R Documentation

Impute continuous variables using Random Forest within MICE

Description

This method can be used to impute continuous variables in MICE by specifying method = 'rfcont'. It was developed independently from the mice.impute.rf algorithm of Doove et al., and differs from it in drawing imputed values from a normal distribution.
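
For example, a minimal call might look like this (a sketch only; mydata stands for any data frame of continuous variables containing missing values):

library(mice)
library(CALIBERrfimpute)
imp <- mice(mydata, method = 'rfcont')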

Usage

mice.impute.rfcont(y, ry, x, ntree_cont = NULL,
    nodesize_cont = NULL, maxnodes_cont = NULL, ntree = NULL, ...)

Arguments

y

a vector of observed values and missing values of the variable to be imputed.

ry

a logical vector of the same length as y, indicating which values of y are observed (TRUE) and which are missing (FALSE).

x

a matrix of predictors to impute y.

ntree_cont

number of trees, default = 10.

A global option can be set thus: setRFoptions(ntree_cont=10). Both ways of supplying these settings are sketched after this list of arguments.

nodesize_cont

minimum size of nodes, default = 5.

A global option can be set thus: setRFoptions(nodesize_cont=5). Smaller values of nodesize create finer, more precise trees but increase the computation time.

maxnodes_cont

maximum number of nodes, default NULL. If NULL, the number of nodes is determined by the number of observations and nodesize_cont.

ntree

an alternative argument for specifying the number of trees, overridden by ntree_cont. This argument is included for consistency with the mice.impute.rf function.

...

other arguments to pass to randomForest.
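
The two ways of supplying these settings are sketched below (illustrative only; mydata is a hypothetical data frame, and the second form assumes that mice forwards extra named arguments to the imputation function):

# Via global options set before calling mice
setRFoptions(ntree_cont = 10)
setRFoptions(nodesize_cont = 5)
imp1 <- mice(mydata, method = 'rfcont')

# Via arguments passed through mice to mice.impute.rfcont
imp2 <- mice(mydata, method = 'rfcont', ntree_cont = 10, nodesize_cont = 5)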

Details

This Random Forest imputation algorithm has been developed as an alternative to normal-based linear regression, and can accommodate non-linear relations and interactions among the predictor variables without requiring them to be specified in the model. The algorithm takes a bootstrap sample of the data to simulate sampling variability, fits a forest of regression trees to the bootstrap sample, and calculates the out-of-bag mean squared error. Each value is imputed as a random draw from a normal distribution with mean defined by the Random Forest prediction and variance equal to the out-of-bag mean squared error.

If only one tree is used (not recommended), a bootstrap sample is not taken in the first stage because the Random Forest algorithm performs an internal bootstrap sample before fitting the tree.
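
In outline, the imputation step might be sketched as follows (an illustrative sketch using the randomForest package, not the package's own source code; impute_rfcont_sketch is a hypothetical name, and the arguments y, ry and x are as described above):

library(randomForest)

impute_rfcont_sketch <- function(y, ry, x, ntree_cont = 10, nodesize_cont = 5) {
    yobs <- y[ry]                       # observed values of the target variable
    xobs <- x[ry, , drop = FALSE]       # predictors for observed rows
    xmis <- x[!ry, , drop = FALSE]      # predictors for rows to be imputed

    # Bootstrap the observed data to simulate sampling variability
    boot <- sample(length(yobs), replace = TRUE)

    # Fit a forest of regression trees to the bootstrap sample
    fit <- randomForest(x = xobs[boot, , drop = FALSE], y = yobs[boot],
        ntree = ntree_cont, nodesize = nodesize_cont)

    # Out-of-bag mean squared error of the fitted forest
    oob_mse <- fit$mse[fit$ntree]

    # Draw each imputed value from a normal distribution centred on the
    # Random Forest prediction, with variance equal to the out-of-bag MSE
    rnorm(nrow(xmis), mean = predict(fit, xmis), sd = sqrt(oob_mse))
}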

Value

A vector of imputed values of y.

Note

This algorithm has been tested on simulated data with linear regression, and in survival analysis of real data with artificially introduced missingness at random. On the simulated data there was slight bias if the distribution of missing values was very different from that of the observed values, because imputed values were closer to the centre of the data than the missing values. However, in the survival analysis the hazard ratios were unbiased, coverage of confidence intervals was more conservative than with normal-based MICE, and the mean length of confidence intervals was shorter with mice.impute.rfcont.

Author(s)

Anoop Shah

References

Shah AD, Bartlett JW, Carpenter J, Nicholas O, Hemingway H. Comparison of Random Forest and parametric imputation models for imputing missing data using MICE: a CALIBER study. American Journal of Epidemiology 2014; 179(6): 764–774. doi:10.1093/aje/kwt312 https://academic.oup.com/aje/article/179/6/764/107562

See Also

setRFoptions, mice.impute.rfcat, mice, mice.impute.rf, mice.impute.cart, randomForest

Examples

# Load the required packages (CALIBERrfimpute provides the 'rfcont' method;
# mice provides mice and pool)
library(CALIBERrfimpute)
library(mice)

set.seed(1)

# A small dataset with a single row to be imputed
mydata <- data.frame(x1 = c(2, 3, NA, 4, 5, 1, 6, 8, 7, 9), x2 = 1:10,
    x3 = c(1, 3, NA, 4, 2, 8, 7, 9, 6, 5))
mice(mydata, method = c('norm', 'norm', 'norm'), m = 2, maxit = 2)
mice(mydata[, 1:2], method = c('rfcont', 'rfcont'), m = 2, maxit = 2)
mice(mydata, method = c('rfcont', 'rfcont', 'rfcont'), m = 2, maxit = 2)
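
# Extra check (illustrative addition, not part of the original example):
# store the mids object and inspect the imputed values for x1
imp_small <- mice(mydata, method = c('rfcont', 'rfcont', 'rfcont'), m = 2, maxit = 2)
print(imp_small$imp$x1)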

# A larger simulated dataset
mydata <- simdata(100)
cat('\nSimulated multivariate normal data:\n')
print(data.frame(mean = colMeans(mydata), sd = sapply(mydata, sd)))

# Apply missingness pattern
mymardata <- makemar(mydata)
cat('\nNumber of missing values:\n')
print(sapply(mymardata, function(x){sum(is.na(x))}))

# Test imputation of a single column in a two-column dataset
cat('\nTest imputation of a simple dataset:\n')
print(mice(mymardata[, c('y', 'x1')], method = 'rfcont'))

# Analyse data
cat('\nFull data analysis:\n')
print(summary(lm(y ~ x1 + x2 + x3, data=mydata)))

cat('\nMICE using normal-based linear regression:\n')
print(summary(pool(with(mice(mymardata,
    method = 'norm'), lm(y ~ x1 + x2 + x3)))))

# Set options for Random Forest
setRFoptions(ntree_cont = 10)

cat('\nMICE using Random Forest:\n')
print(summary(pool(with(mice(mymardata,
    method = 'rfcont'), lm(y ~ x1 + x2 + x3)))))
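
# Illustrative extra step (not part of the original example): inspect one
# completed dataset from a Random Forest imputation using mice's complete()
imp_rf <- mice(mymardata, method = 'rfcont', m = 2, maxit = 2)
cat('\nFirst few rows of one completed dataset:\n')
print(head(complete(imp_rf, 1)))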

[Package CALIBERrfimpute version 1.0-7]