gafsControl {caret}        R Documentation

Control parameters for GA and SA feature selection

Description

Control the computational nuances of the gafs and safs functions

Many of these options are the same as those described for trainControl. More extensive documentation and examples can be found on the caret website at http://topepo.github.io/caret/feature-selection-using-genetic-algorithms.html#syntax and http://topepo.github.io/caret/feature-selection-using-simulated-annealing.html#syntax.

The functions component contains the information about how the model should be fit and summarized. It also contains the elements needed for the GA and SA modules (e.g., cross-over).

The elements of functions that are the same for GAs and SAs are:

  fit: fits the model to a given subset of predictors

  pred: generates predictions for new samples

  fitness_intern: summarizes the internal estimates of fitness

  fitness_extern: summarizes performance using the externally held-out samples

  selectIter: determines the best iteration of the search

The elements of functions specific to genetic algorithms are:

  initial: creates the initial population of subsets

  selection: selects individuals for reproduction

  crossover: carries out cross-over between parent subsets

  mutation: applies random mutations to individuals

The elements of functions specific to simulated annealing are:

  initial: creates the initial predictor subset

  perturb: makes small, random changes to the current subset

  prob: computes the probability of accepting a worse solution

The pages http://topepo.github.io/caret/feature-selection-using-genetic-algorithms.html and http://topepo.github.io/caret/feature-selection-using-simulated-annealing.html have more details about each of these functions.
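
For illustration, a sketch that supplies the built-in rfGA helper list directly; the training data objects x_train and y_train are hypothetical:

library(caret)

## rfGA already contains all of the required elements
ga_ctrl <- gafsControl(functions = rfGA,
                       method = "cv",
                       number = 5)

## sketch of a search with this control object (not run here)
## ga_res <- gafs(x = x_train, y = y_train, iters = 10, gafsControl = ga_ctrl)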

holdout can be used to hold out samples for computing the internal fitness value. Note that this is independent of the external resampling step. Suppose 10-fold CV is being used. Within a resampling iteration, holdout can be used to sample an additional proportion of the 90% resampled data to use for estimating fitness. This may not be a good idea unless you have a very large training set and want to avoid an internal resampling procedure to estimate fitness.
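
For example, a sketch that holds back 20% of each resample's training data for the internal fitness estimate (the helper list is illustrative):

ctrl <- gafsControl(functions = rfGA,
                    method = "cv",
                    number = 10,
                    holdout = 0.2)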

The search algorithms can be parallelized in several places:

  1. each externally resampled GA or SA can be run independently (controlled by the allowParallel option)

  2. within a GA, the fitness calculations at a particular generation can be run in parallel over the current set of individuals (see the genParallel argument)

  3. if inner resampling is used, these can be run in parallel (controls depend on the function used. See, for example, trainControl)

  4. any parallelization of the individual model fits. This is also specific to the modeling function.

It is probably best to pick one of these areas for parallelization; the first is likely to produce the largest decrease in run-time since it is the least likely to incur repeated re-starting of the worker processes. Keep in mind that if multiple levels of parallelization occur, this can affect the number of workers and the amount of memory required exponentially.
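
A sketch of the first option, assuming the doParallel package is available; the number of workers is arbitrary:

library(doParallel)

cl <- parallel::makePSOCKcluster(4)     # four worker processes
registerDoParallel(cl)

ctrl <- gafsControl(functions = rfGA,
                    method = "cv",
                    number = 10,
                    allowParallel = TRUE,  # parallelize the external resamples
                    genParallel = FALSE)   # keep within-generation fitness serial

## ... run gafs() with this control object, then shut the workers down
parallel::stopCluster(cl)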

Usage

gafsControl(
  functions = NULL,
  method = "repeatedcv",
  metric = NULL,
  maximize = NULL,
  number = ifelse(grepl("cv", method), 10, 25),
  repeats = ifelse(grepl("cv", method), 1, 5),
  verbose = FALSE,
  returnResamp = "final",
  p = 0.75,
  index = NULL,
  indexOut = NULL,
  seeds = NULL,
  holdout = 0,
  genParallel = FALSE,
  allowParallel = TRUE
)

safsControl(
  functions = NULL,
  method = "repeatedcv",
  metric = NULL,
  maximize = NULL,
  number = ifelse(grepl("cv", method), 10, 25),
  repeats = ifelse(grepl("cv", method), 1, 5),
  verbose = FALSE,
  returnResamp = "final",
  p = 0.75,
  index = NULL,
  indexOut = NULL,
  seeds = NULL,
  holdout = 0,
  improve = Inf,
  allowParallel = TRUE
)

Arguments

functions

a list of functions for model fitting, prediction etc (see Details below)

method

The resampling method: boot, boot632, cv, repeatedcv, LOOCV, LGOCV (for repeated training/test splits)

metric

a two-element character vector that specifies what summary metric will be used to select the optimal number of iterations from the external fitness value and which metric should guide subset selection. If specified, this vector should have names "internal" and "external". See gafs and/or safs for explanations of the difference.

maximize

a two-element logical: should the metrics be maximized or minimized? Like the metric argument, this vector should have names "internal" and "external".
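
For example, a sketch that guides the internal search with the area under the ROC curve and judges the external resamples on accuracy; these metric names assume summary functions that actually compute them:

ctrl <- gafsControl(functions = caretGA,
                    metric = c(internal = "ROC", external = "Accuracy"),
                    maximize = c(internal = TRUE, external = TRUE))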

number

Either the number of folds or number of resampling iterations

repeats

For repeated k-fold cross-validation only: the number of complete sets of folds to compute

verbose

a logical for printing results

returnResamp

A character string indicating how much of the resampled summary metrics should be saved. Values can be “all” or “none”

p

For leave-group out cross-validation: the training percentage

index

a list with elements for each resampling iteration. Each list element is the sample rows used for training at that iteration.

indexOut

a list (the same length as index) that dictates which samples are held out for each resample. If NULL, then the unique set of samples not contained in index is used.
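
A sketch of custom resampling indices built with createFolds; the outcome vector y is hypothetical and indexOut is left NULL so the held-out samples are inferred:

set.seed(825)
cv_index <- createFolds(y, k = 10, returnTrain = TRUE)
ctrl <- gafsControl(functions = rfGA,
                    method = "cv",
                    index = cv_index)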

seeds

a vector of integers that can be used to set the seed during each search. The number of seeds must be equal to the number of resamples plus one.
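
For example, 10-fold CV produces 10 resamples, so 11 seeds are required; the values below are arbitrary:

set.seed(1)
ctrl <- gafsControl(functions = rfGA,
                    method = "cv",
                    number = 10,
                    seeds = sample.int(10000, 11))  # 10 resamples + 1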

holdout

the proportion of data in [0, 1) to be held-back from x and y to calculate the internal fitness values

genParallel

if a parallel backend is loaded and available, should gafs use it to parallelize the fitness calculations within a generation within a resample?

allowParallel

if a parallel backend is loaded and available, should the function use it?

improve

the number of iterations without improvement before safs reverts to the previous optimal subset
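
As a sketch, a simulated annealing control that resets to the last best subset after 25 non-improving iterations (the helper list is illustrative):

sa_ctrl <- safsControl(functions = rfSA,
                       method = "cv",
                       number = 10,
                       improve = 25)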

Value

An echo of the parameters specified

Author(s)

Max Kuhn

References

http://topepo.github.io/caret/feature-selection-using-genetic-algorithms.html, http://topepo.github.io/caret/feature-selection-using-simulated-annealing.html

See Also

gafs, safs, caretGA, rfGA, treebagGA, caretSA, rfSA, treebagSA

