tidy_xgboost {autostats}    R Documentation

tidy xgboost

Description

Accepts a formula to run an xgboost model. Automatically determines from the outcome variable whether the formula specifies a classification or a regression problem, and returns the fitted xgboost model.

Usage

tidy_xgboost(
  .data,
  formula,
  ...,
  mtry = 1,
  trees = 15L,
  min_n = 1L,
  tree_depth = 6L,
  learn_rate = 0.3,
  loss_reduction = 0,
  sample_size = 1,
  stop_iter = 10L,
  counts = FALSE,
  tree_method = c("auto", "exact", "approx", "hist", "gpu_hist"),
  monotone_constraints = 0L,
  num_parallel_tree = 1L,
  lambda = 1,
  alpha = 0,
  scale_pos_weight = 1,
  verbosity = 0L,
  validate = TRUE
)

Arguments

.data

a data frame

formula

a formula specifying the outcome and predictors

...

additional parameters passed on to set_engine

mtry

# Randomly Selected Predictors (xgboost: colsample_bynode) (type: numeric, range 0 - 1; or type: integer if counts = TRUE)

trees

# Trees (xgboost: nrounds) (type: integer, default: 15L)

min_n

Minimal Node Size (xgboost: min_child_weight) (type: integer, default: 1L); typical range: 2-10. Keep this value small for highly imbalanced class data, where leaf nodes can contain small groups; otherwise increase it to prevent overfitting to outliers.

tree_depth

Tree Depth (xgboost: max_depth) (type: integer, default: 6L); Typical values: 3-10

learn_rate

Learning Rate (xgboost: eta) (type: double, default: 0.3); Typical values: 0.01-0.3

loss_reduction

Minimum Loss Reduction (xgboost: gamma) (type: double, default: 0.0); range: 0 to Inf; typical value: 0 - 20 assuming low-mid tree depth

sample_size

Proportion Observations Sampled (xgboost: subsample) (type: double, default: 1.0); Typical values: 0.5 - 1

stop_iter

# Iterations Before Stopping (xgboost: early_stop) (type: integer, default: 10L); only enabled if a validation set is provided (see the early-stopping sketch in Examples)

counts

if TRUE, specify mtry as an integer number of columns; if FALSE (the default), specify mtry as a fraction of columns between 0 and 1 (see the counts sketch in Examples)

tree_method

the xgboost tree_method; default is "auto". Reference: the xgboost tree method documentation (a "hist" sketch appears in Examples)

monotone_constraints

an integer vector with length equal to the number of predictor columns, where each element is -1, 1, or 0, imposing a decreasing, increasing, or no constraint on the predictor at that index (see the constraints sketch in Examples). Reference: the xgboost monotonicity documentation.

num_parallel_tree

number of trees grown in parallel during each boosting round; set it to the size of the forest when training a boosted random forest (see the random-forest sketch in Examples). Default 1L.

lambda

[default=1] L2 regularization term on weights. Increasing this value makes the model more conservative.

alpha

[default=0] L1 regularization term on weights. Increasing this value makes the model more conservative.

scale_pos_weight

[default=1] Controls the balance of positive and negative weights, useful for unbalanced classes. If set to TRUE, sum(negative instances) / sum(positive instances) is calculated automatically (see the class-imbalance sketch in Examples).

verbosity

[default=0] Verbosity of printed messages. Valid values are 0 (silent), 1 (warning), 2 (info), and 3 (debug).

validate

default TRUE; reports accuracy metrics on an internally created validation set.

Details

Reference for parameters: the xgboost parameter documentation.

Value

xgb.Booster model

Examples


options(rlang_trace_top_env = rlang::current_env())


# regression on numeric variable

iris %>%
 framecleaner::create_dummies(Species) -> iris_dummy

iris_dummy %>%
 tidy_formula(target = Petal.Length) -> petal_form

iris_dummy %>%
 tidy_xgboost(
   petal_form,
   trees = 500,
   mtry = .5
 )  -> xg1

xg1 %>%
 visualize_model(top_n = 2)

xg1 %>%
 tidy_predict(newdata = iris_dummy, form = petal_form) -> iris_preds

iris_preds %>%
 eval_preds()
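

# mtry as an integer count of columns
# a sketch of the counts argument: with counts = TRUE, mtry is read as
# a number of columns rather than a fraction. xg1_counts is an
# illustrative name, not from the package docs.

iris_dummy %>%
 tidy_xgboost(
   petal_form,
   trees = 100L,
   mtry = 3L,
   counts = TRUE
 ) -> xg1_counts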


# binary classification
# returns probability and labels

iris %>%
 tidy_formula(target = Species) -> species_form

iris %>%
 dplyr::filter(Species != "versicolor") %>%
 dplyr::mutate(Species = forcats::fct_drop(Species)) -> iris_binary

iris_binary %>%
 tidy_xgboost(formula = species_form, trees = 50L, mtry = 0.2) -> xgb_bin

xgb_bin %>%
 tidy_predict(newdata = iris_binary, form = species_form) -> iris_binary1

iris_binary1 %>%
 eval_preds()
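
# handling class imbalance with scale_pos_weight
# a sketch of the TRUE shortcut described in the arguments, which
# computes sum(negative instances) / sum(positive instances)
# automatically. iris_binary happens to be balanced, so this is
# purely illustrative.

iris_binary %>%
 tidy_xgboost(species_form,
              trees = 50L,
              scale_pos_weight = TRUE) -> xgb_bin_weighted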


# multiclass classification that returns labels

iris %>%
 tidy_xgboost(species_form,
              objective = "multi:softmax",
              trees = 100,
              tree_depth = 3L,
              loss_reduction = 0.5) -> xgb2



xgb2 %>%
 tidy_predict(newdata = iris, form = species_form) -> iris_preds

# additional yardstick metrics can be supplied to the dots in eval_preds

iris_preds %>%
 eval_preds(yardstick::j_index)


# multiclass classification that returns probabilities


iris %>%
 tidy_xgboost(species_form,
              objective = "multi:softprob",
              trees = 50L,
              sample_size = .2,
              mtry = .5,
              tree_depth = 2L,
              loss_reduction = 3) -> xgb2_prob

# predict on the data that already has the class labels, so the resulting data frame
# has class and prob predictions

xgb2_prob %>%
 tidy_predict(newdata = iris_preds, form = species_form) -> iris_preds1

# also requires the labels in the dataframe to evaluate preds.
# the model name must be supplied as well; then roc metrics can be calculated
# iris_preds1 %>%
#  eval_preds(yardstick::average_precision, softprob_model = "xgb2_prob")
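
# monotone constraints on the regression model
# a sketch of monotone_constraints: one value per predictor column,
# in column order. the ordering below is an assumption about how
# iris_dummy's predictors enter the model; check the column order first.

constraints <- c(1L, 0L, 1L, 0L, 0L, 0L) # increasing in the 1st and 3rd predictors

iris_dummy %>%
 tidy_xgboost(petal_form,
              trees = 100L,
              monotone_constraints = constraints) -> xg_mono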

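# a boosted random forest via num_parallel_tree
# a sketch following the usual xgboost recipe (an assumption, not from
# the package docs): one boosting round, learn_rate of 1, and row and
# column subsampling, with num_parallel_tree set to the forest size

iris_dummy %>%
 tidy_xgboost(petal_form,
              trees = 1L,
              num_parallel_tree = 100L,
              learn_rate = 1,
              sample_size = 0.8,
              mtry = 0.8) -> xg_forest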

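# early stopping against the internal validation split
# a sketch, assuming the validation set created when validate = TRUE
# also drives stop_iter (see the stop_iter argument above)

iris_dummy %>%
 tidy_xgboost(petal_form,
              trees = 500L,
              stop_iter = 10L,
              validate = TRUE) -> xg_early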

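# histogram-based tree construction
# a sketch passing tree_method = "hist", one of the options in the
# signature; typically faster than "exact" on larger datasets

iris_dummy %>%
 tidy_xgboost(petal_form,
              trees = 100L,
              tree_method = "hist") -> xg_hist
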
[Package autostats version 0.3.0]