booster {rbooster}    R Documentation

AdaBoost Framework for Any Classifier

Description

This function allows any classifier to be used in the Discrete or Real AdaBoost framework.

Usage

booster(
  x_train,
  y_train,
  classifier = "rpart",
  predictor = NULL,
  method = "discrete",
  x_test = NULL,
  y_test = NULL,
  weighted_bootstrap = FALSE,
  max_iter = 50,
  lambda = 1,
  print_detail = TRUE,
  print_plot = FALSE,
  bag_frac = 0.5,
  p_weak = NULL,
  ...
)

discrete_adaboost(
  x_train,
  y_train,
  classifier = "rpart",
  predictor = NULL,
  x_test = NULL,
  y_test = NULL,
  weighted_bootstrap = FALSE,
  max_iter = 50,
  lambda = 1,
  print_detail = TRUE,
  print_plot = FALSE,
  bag_frac = 0.5,
  p_weak = NULL,
  ...
)

real_adaboost(
  x_train,
  y_train,
  classifier = "rpart",
  predictor = NULL,
  x_test = NULL,
  y_test = NULL,
  weighted_bootstrap = FALSE,
  max_iter = 50,
  lambda = 1,
  print_detail = TRUE,
  print_plot = FALSE,
  bag_frac = 0.5,
  p_weak = NULL,
  ...
)

Arguments

x_train

feature matrix.

y_train

a factor class variable. The boosting algorithm allows for k >= 2 classes; however, not all classifiers are capable of multiclass classification.

classifier

a pre-ready or custom classifier function. Pre-ready classifiers are "rpart", "glm", "gnb", "dnb", "earth".

predictor

prediction function for the classifier. Its output must be a factor variable with the same levels as y_train.

method

"discrete" or "real" for Discrete or Real Adaboost.

x_test

an optional test feature matrix. Can be used instead of the predict function; print_detail and print_plot then give information about the test set.

y_test

an optional factor test class variable with the same levels as y_train. Can be used instead of the predict function; print_detail and print_plot then give information about the test set.

weighted_bootstrap

If the classifier does not support case weights, weighted_bootstrap must be TRUE so that weighting is carried out through bootstrap sampling. If the classifier supports case weights, it must be FALSE. Default is FALSE.

max_iter

maximum number of boosting iterations. Defaults to 50. It should probably be higher for classifiers other than decision trees.

lambda

a multiplier for model weights. Defaults to 1. Higher values lead to more unstable weak classifiers, which is sometimes desirable; lower values lead to slower fitting.

print_detail

a logical for printing errors at each iteration. Defaults to TRUE.

print_plot

a logical for plotting errors. Defaults to FALSE.

bag_frac

a value between 0 and 1, the proportion of cases to be used in each iteration. Smaller subsets may be better for creating weaker classifiers; 1 means all cases are used. Defaults to 0.5. Ignored if weighted_bootstrap == TRUE.

p_weak

number of variables to use in weak classifiers. Defaults to the number of columns in x_train. Lower values lead to weaker classifiers.

...

additional arguments passed to the classifier and predictor functions.

Details

method can currently be "discrete" or "real", indicating Discrete AdaBoost and Real AdaBoost respectively. For multiclass classification, "discrete" uses the SAMME algorithm and "real" uses SAMME.R.

Pre-ready classifiers are "rpart", "glm", "dnb", "gnb", "earth", which correspond to CART, logistic regression, discrete naive Bayes, Gaussian naive Bayes and the MARS classifier, respectively.

predictor is valid only if a custom classifier function is given. A custom classifier function should be of the form function(x_train, y_train, weights, ...) and its output must be a model object that can be passed to predictor. The predictor function is of the form function(model, x_new, type, ...) and its output must be a vector of class predictions. type must be "pred" or "prob", which gives a vector of classes or a matrix of probabilities, where each column represents a class. See vignette("booster", package = "rbooster") for examples.
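
For illustration, a skeleton of such a pair might look as follows (a sketch only; the rpart calls stand in for any model fit that accepts case weights):

## illustrative skeleton of a custom classifier/predictor pair
my_classifier <- function(x_train, y_train, weights, ...) {
  dat <- data.frame(x_train, y = y_train)
  rpart::rpart(y ~ ., data = dat, weights = weights, ...)
}
my_predictor <- function(model, x_new, type = "pred", ...) {
  probs <- predict(model, newdata = data.frame(x_new), type = "prob")
  if (type == "prob") return(probs)
  factor(colnames(probs)[max.col(probs)], levels = colnames(probs))
}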

lambda is a multiplier of model weights.
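
To make the role of lambda concrete: in the SAMME formulation of Hastie et al. (2009), a model with weighted error err receives weight log((1 - err)/err) + log(k - 1), and lambda scales this quantity. A rough illustration, assuming lambda enters multiplicatively as stated above:

## illustrative: effect of lambda on a SAMME-style model weight
err <- 0.3; k <- 3
alpha <- function(lambda) lambda * (log((1 - err)/err) + log(k - 1))
alpha(1)    # default
alpha(0.5)  # smaller steps, slower fitting
alpha(2)    # larger steps, less stable weak classifiers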

weighted_bootstrap controls bootstrap sampling in each step. If the classifier accepts case weights, it is better to turn it off. If the classifier does not accept case weights, weighted bootstrap turns it into a weighted classifier via bootstrap sampling. Learning may be slower this way.
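
A minimal sketch of the idea, assuming a feature matrix x_train, labels y_train and a case weight vector w:

## illustrative weighted-bootstrap step
n <- nrow(x_train)
i_boot <- sample(n, size = n, replace = TRUE, prob = w)
x_boot <- x_train[i_boot, , drop = FALSE]
y_boot <- y_train[i_boot]
## an unweighted classifier fit on (x_boot, y_boot) now behaves
## approximately like a weighted fit on (x_train, y_train) with weights w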

bag_frac helps a classifier to be "weaker" by reducing the sample size. Stronger classifiers may require lower values of bag_frac. p_weak does the same by reducing the number of variables.
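
For intuition, the per-step subsampling amounts to something like the following sketch (not the package's exact internals):

## illustrative row/column subsampling for one boosting step
n <- nrow(x_train); p <- ncol(x_train)
i_bag  <- sample(n, size = floor(bag_frac * n))  # bag_frac: fewer cases
j_weak <- sample(p, size = p_weak)               # p_weak: fewer variables
x_weak <- x_train[i_bag, j_weak, drop = FALSE]
y_weak <- y_train[i_bag]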

Value

a booster object with the components below.

n_train

Number of cases in the input dataset.

w

Case weights for the final boost.

p

Number of features.

weighted_bootstrap

TRUE if weighted bootstrap applied. Otherwise FALSE.

max_iter

Maximum number of boosting steps.

lambda

The multiplier of model weights.

predictor

Function used for prediction.

alpha

Model weights.

err_train

A vector of train errors in each step of boosting.

err_test

A vector of test errors at each step of boosting. NULL if no test data were given.

models

Models obtained in each boosting step.

x_classes

A list of datasets, which are x_train separated for each class.

n_classes

Number of cases for each class in the input dataset.

k_classes

Number of classes in class variable.

bag_frac

Proportion of input dataset used in each boosting step.

class_names

Names of classes in class variable.
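
These components can be inspected directly on a fitted object, e.g. mm from the Examples below:

mm$alpha       ## model weights across boosting steps
mm$err_train   ## training error at each step
mm$k_classes   ## number of classes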

Author(s)

Fatih Saglam, fatih.saglam@omu.edu.tr

References

Freund, Y., & Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of computer and system sciences, 55(1), 119-139.

Hastie, T., Rosset, S., Zhu, J., & Zou, H. (2009). Multi-class AdaBoost. Statistics and its Interface, 2(3), 349-360.

See Also

predict.booster

Examples

require(rbooster)
## n number of cases, p number of variables, k number of classes.
cv_sampler <- function(y, train_proportion) {
 unlist(lapply(unique(y), function(m) sample(which(y == m), round(sum(y == m) * train_proportion))))
}

data_simulation <- function(n, p, k, train_proportion){
 means <- seq(0, k*2.5, length.out = k)
 x <- do.call(rbind, lapply(means,
                            function(m) matrix(data = rnorm(n = round(n/k)*p,
                                                            mean = m,
                                                            sd = 2),
                                               nrow = round(n/k))))
 y <- factor(rep(letters[1:k], each = round(n/k)))
 train_i <- cv_sampler(y, train_proportion)

 data <- data.frame(x, y = y)
 data_train <- data[train_i,]
 data_test <- data[-train_i,]
 return(list(data = data,
             data_train = data_train,
             data_test = data_test))
}
### binary classification
dat <- data_simulation(n = 500, p = 2, k = 2, train_proportion = 0.8)

mm <- booster(x_train = dat$data_train[,1:2],
             y_train = dat$data_train[,3],
             classifier = "rpart",
             method = "discrete",
             x_test = dat$data_test[,1:2],
             y_test = dat$data_test[,3],
             weighted_bootstrap = FALSE,
             max_iter = 100,
             lambda = 1,
             print_detail = TRUE,
             print_plot = TRUE,
             bag_frac = 1,
             p_weak = 2)

## test prediction
mm$test_prediction
## or
pp <- predict(object = mm, newdata = dat$data_test[,1:2], type = "pred")
## test error
tail(mm$err_test, 1)
sum(dat$data_test[,3] != pp)/nrow(dat$data_test)

### multiclass classification
dat <- data_simulation(n = 800, p = 5, k = 3, train_proportion = 0.8)

mm <- booster(x_train = dat$data_train[,1:5],
             y_train = dat$data_train[,6],
             classifier = "rpart",
             method = "real",
             x_test = dat$data_test[,1:5],
             y_test = dat$data_test[,6],
             weighted_bootstrap = FALSE,
             max_iter = 100,
             lambda = 1,
             print_detail = TRUE,
             print_plot = TRUE,
             bag_frac = 1,
             p_weak = 2)

## test prediction
mm$test_prediction
## or
pp <- predict(object = mm, newdata = dat$data_test[,1:5], type = "pred", print_detail = TRUE)
## test error
tail(mm$err_test, 1)
sum(dat$data_test[,6] != pp)/nrow(dat$data_test)
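
### the same interface works with the other pre-ready classifiers,
### e.g. Gaussian naive Bayes (a brief sketch; other arguments at defaults)
mm_gnb <- booster(x_train = dat$data_train[,1:5],
                  y_train = dat$data_train[,6],
                  classifier = "gnb",
                  method = "discrete",
                  x_test = dat$data_test[,1:5],
                  y_test = dat$data_test[,6],
                  max_iter = 100)
tail(mm_gnb$err_test, 1)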

### binary classification, custom classifier
dat <- data_simulation(n = 500, p = 10, k = 2, train_proportion = 0.8)
x <- dat$data[,1:10]
y <- dat$data[,11]

x_train <- dat$data_train[,1:10]
y_train <- dat$data_train[,11]

x_test <- dat$data_test[,1:10]
y_test <- dat$data_test[,11]

## a custom regression classifier function
classifier_lm <- function(x_train, y_train, weights, ...){
 ## code the first level as +1 and the second as -1 for least squares
 y_train_code <- c(-1, 1)
 y_train_coded <- sapply(levels(y_train), function(m) y_train_code[(y_train == m) + 1])
 y_train_coded <- y_train_coded[,1]

 model <- lm.wfit(x = as.matrix(cbind(1,x_train)), y = y_train_coded, w = weights)
 return(list(coefficients = model$coefficients,
             levels = levels(y_train)))
}

## predictor function

predictor_lm <- function(model, x_new, type = "pred", ...) {
 coef <- model$coefficients
 levels <- model$levels

 ## logistic transform of the linear fit gives P(first level)
 fit <- as.matrix(cbind(1, x_new)) %*% coef
 probs <- 1/(1 + exp(-fit))
 probs <- data.frame(probs, 1 - probs)
 colnames(probs) <- levels

 if (type == "pred") {
   preds <- factor(levels[apply(probs, 1, which.max)], levels = levels, labels = levels)
   return(preds)
 }
 if (type == "prob") {
   return(probs)
 }
}
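
## quick sanity check of the custom pair with uniform weights (illustrative)
m0 <- classifier_lm(x_train, y_train, weights = rep(1, nrow(x_train)))
p0 <- predictor_lm(m0, x_test, type = "pred")
table(y_test, p0)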

## real AdaBoost
mm <- booster(x_train = x_train,
             y_train = y_train,
             classifier = classifier_lm,
             predictor = predictor_lm,
             method = "real",
             x_test = x_test,
             y_test = y_test,
             weighted_bootstrap = FALSE,
             max_iter = 50,
             lambda = 1,
             print_detail = TRUE,
             print_plot = TRUE,
             bag_frac = 0.5,
             p_weak = 2)

## test prediction
mm$test_prediction
pp <- predict(object = mm, newdata = x_test, type = "pred", print_detail = TRUE)
## test error
tail(mm$err_test, 1)
sum(y_test != pp)/nrow(x_test)

## discrete AdaBoost
mm <- booster(x_train = x_train,
             y_train = y_train,
             classifier = classifier_lm,
             predictor = predictor_lm,
             method = "discrete",
             x_test = x_test,
             y_test = y_test,
             weighted_bootstrap = FALSE,
             max_iter = 50,
             lambda = 1,
             print_detail = TRUE,
             print_plot = TRUE,
             bag_frac = 0.5,
             p_weak = 2)

## test prediction
mm$test_prediction
pp <- predict(object = mm, newdata = x_test, type = "pred", print_detail = TRUE)
## test error
tail(mm$err_test, 1)
sum(y_test != pp)/nrow(x_test)

# plot function can be used to plot errors
plot(mm)

# more examples are in vignette("booster", package = "rbooster")

