gpb.importance {gpboost}    R Documentation
Compute feature importance in a model
Description
Creates a data.table of feature importances in a model.
Usage
gpb.importance(model, percentage = TRUE)
Arguments
model
: object of class gpb.Booster.
percentage
: whether to show importance in relative percentage.
Value
For a tree model, a data.table with the following columns:
Feature
: Feature names in the model.
Gain
: The total gain of this feature's splits.
Cover
: The number of observations related to this feature.
Frequency
: The number of times a feature is used in splits across the trees.
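Because the result is a data.table, the columns above can be inspected and sorted directly; a minimal sketch, assuming a fitted model as in the Examples below:
imp <- gpb.importance(model, percentage = TRUE)
# Sort features by their total split gain, largest first
imp[order(-Gain)]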
Examples
# Load example data and construct a gpboost Dataset
data(agaricus.train, package = "gpboost")
train <- agaricus.train
dtrain <- gpb.Dataset(train$data, label = train$label)
# Parameters for a binary classification boosting model
params <- list(
  objective = "binary"
  , learning_rate = 0.1
  , max_depth = -1L
  , min_data_in_leaf = 1L
  , min_sum_hessian_in_leaf = 1.0
)
# Train for 5 boosting rounds
model <- gpb.train(
  params = params
  , data = dtrain
  , nrounds = 5L
)
# Feature importance in relative percentages and as raw totals
tree_imp1 <- gpb.importance(model, percentage = TRUE)
tree_imp2 <- gpb.importance(model, percentage = FALSE)
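The importance table can also be visualized; a sketch assuming the package's gpb.plot.importance() helper (not part of the example above):
# Bar chart of the top 5 features ranked by Gain
gpb.plot.importance(tree_imp1, top_n = 5L, measure = "Gain")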
[Package gpboost version 1.5.1.1]