xgrove {xgrove}    R Documentation
Explanation groves
Description
Compute surrogate groves to explain a predictive machine learning model and to analyze the trade-off between complexity and explanatory power.
Usage
xgrove(
model,
data,
ntrees = c(4, 8, 16, 32, 64, 128),
pfun = NULL,
shrink = 1,
b.frac = 1,
seed = 42,
...
)
Arguments
model
A model with a corresponding predict() function that returns numeric values.
data
Data that must not (!) contain the target variable.
ntrees
Sequence of integers: number of boosting trees for rule extraction.
pfun
Optional predict function of the form function(model, data) returning a numeric vector. If NULL, the model's own predict() method is used (see the sketch below for a custom pfun).
shrink
Sets the shrinkage argument for the internal call of gbm. As the surrogate grove is fitted to deterministic model predictions, the default is 1.
b.frac
Sets the bag.fraction argument for the internal call of gbm. As above, the default is 1.
seed
Seed for the random number generator to ensure reproducible results (e.g. when b.frac < 1, since rows are then sampled at random during boosting).
...
Further arguments to be passed to gbm or the predict() method of the model.
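For models whose predict() method does not return a numeric vector directly, a custom pfun can be supplied. A minimal sketch, assuming a ranger model (ranger's predict() returns an object with a $predictions component; the object name rf_ranger is illustrative):

# library(ranger); rf_ranger <- ranger(cmedv ~ ., data = boston)  # illustrative model
pfun <- function(model, data) {
  # extract the numeric predictions from ranger's prediction object
  predict(model, data = data)$predictions
}
# xg <- xgrove(rf_ranger, data, pfun = pfun)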
Details
A surrogate grove is trained via gradient boosting using gbm on data, with the predictions of the model as the target variable. Note that data must not contain the original target variable! The boosting model is trained using stumps (trees of depth 1). The resulting interpretation is extracted from pretty.gbm.tree.
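The following is a minimal sketch of this idea, not the package internals; the gbm settings mirror the defaults described under Arguments:

# Illustrative: fit a grove of stumps to the model's predictions
yhat <- predict(model, data)                     # surrogate target: model predictions
d <- cbind(data, yhat = yhat)
grove <- gbm::gbm(yhat ~ ., data = d, distribution = "gaussian",
                  n.trees = 64, interaction.depth = 1,  # stumps
                  shrinkage = 1, bag.fraction = 1)
gbm::pretty.gbm.tree(grove, i.tree = 1)          # split structure of the first stump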
Value
List of the results:
explanation
Matrix containing tree sizes, rules and explainability.
rules
Summary of the explanation grove: rules with identical splits are aggregated. For numeric variables, splits are merged if they lead to identical partitions of the training data.
groves
Rules of the explanation grove.
model
The surrogate grove, i.e. the fitted gbm model.
Author(s)
Gero Szepannek
References
- Szepannek, G. and von Holt, B.H. (2023): Can’t see the forest for the trees – analyzing groves to explain random forests, Behaviormetrika, DOI: 10.1007/s41237-023-00205-2.
- Szepannek, G. and Luebke, K. (2023): How much do we see? On the explainability of partial dependence plots for credit risk scoring, Argumenta Oeconomica 50, DOI: 10.15611/aoe.2023.1.07.
Examples
library(randomForest)
library(pdp)
data(boston)
set.seed(42)
rf <- randomForest(cmedv ~ ., data = boston)
data <- boston[, -3] # remove the target variable (cmedv, column 3)
ntrees <- c(4, 8, 16, 32, 64, 128)
xg <- xgrove(rf, data, ntrees)
xg
plot(xg)
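The returned list can be inspected directly. A short sketch, assuming the structure documented under Value:

xg$explanation  # tree sizes, rules and explainability per grove size
xg$rules        # aggregated rules of the grove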