bagged.outliertrees {bagged.outliertrees}    R Documentation
Bagged OutlierTrees
Description
Fit a Bagged OutlierTrees ensemble model to normal data that may contain some outliers.
Usage
bagged.outliertrees(
  df,
  ntrees = 100L,
  subsampling_rate = 0.25,
  max_depth = 4L,
  min_gain = 0.01,
  z_norm = 2.67,
  z_outlier = 8,
  pct_outliers = 0.01,
  min_size_numeric = 25L,
  min_size_categ = 50L,
  categ_split = "binarize",
  categ_outliers = "tail",
  numeric_split = "raw",
  cols_ignore = NULL,
  follow_all = FALSE,
  gain_as_pct = TRUE,
  nthreads = parallel::detectCores()
)
Arguments
df
Data frame with normal data that might contain some outliers. See details for allowed column types.
ntrees
Controls the ensemble size (i.e. the number of OutlierTrees or bootstrapped training sets). A large value is always recommended to build a robust and stable ensemble. Should be decreased if training is taking too much time. |
subsampling_rate
Sub-sampling rate used for bootstrapping. A small rate results in smaller bootstrapped training sets, which should not suffer from the masking effect. This parameter should be adjusted to the size of the training data (a smaller value for larger training data, and vice versa).
max_depth
Maximum depth of the trees to grow. Can also pass zero, in which case it will only look for outliers with no conditions (i.e. takes each column as a 1-d distribution and looks for outliers in there independently of the values in other columns). |
min_gain
Minimum gain that a split has to produce in order to consider it (both in terms of looking for outliers in each branch, and in considering whether to continue branching from them). Note that the default value for GritBot is 1e-6, with gain_as_pct = FALSE.
z_norm
Maximum Z-value (from standard normal distribution) that can be considered as a normal observation. Note that simply having values above this will not automatically flag observations as outliers, nor does it assume that columns follow normal distributions. Also used for categorical and ordinal columns for building approximate confidence intervals of proportions. |
z_outlier
Minimum Z-value that can be considered as an outlier. There must be a large gap in the Z-value of the next observation in sorted order to consider it as outlier, given by (z_outlier - z_norm). Decreasing this parameter is likely to result in more observations being flagged as outliers. Ignored for categorical and ordinal columns. |
pct_outliers
Approximate max percentage of outliers to expect in a given branch. |
min_size_numeric
Minimum size that branches need to have when splitting a numeric column. In order to look for outliers in a given branch for a numeric column, it must have a minimum of twice this number of observations. |
min_size_categ
Minimum size that branches need to have when splitting a categorical or ordinal column. In order to look for outliers in a given branch for a categorical, ordinal, or boolean column, it must have a minimum of twice this number of observations. |
categ_split
How to produce categorical-by-categorical splits. Options are "binarize" (the default), "bruteforce", and "separate".
categ_outliers
How to look for outliers in categorical variables. Options are "tail" (the default) and "majority".
numeric_split
How to determine the split point in numeric variables. Options are "raw" (the default) and "mid". This doesn't affect how outliers are determined in the training data passed in df, but it does affect how they are presented and how new outliers are detected when using predict.
cols_ignore
Vector containing columns which will not be split, but will be evaluated for usage in splitting other columns. Can pass either a logical (boolean) vector with one entry per column of df, or a character vector of column names (which must match those of df). Pass NULL to use all columns.
follow_all
Whether to continue branching from each split that meets the size and gain criteria. This will produce exponentially many more branches, and if the depth is large, might take a very long time to finish. It will also produce many more spurious outliers. Not recommended.
gain_as_pct
Whether the minimum gain above should be taken in absolute terms, or as a percentage of the standard deviation (for numerical columns) or Shannon entropy (for categorical columns). Taking it in absolute terms will prefer making more splits on columns that have a large variance, while taking it as a percentage might be more restrictive on them and might create deeper trees in some columns. For GritBot this parameter would always be FALSE (see the sketch after this argument list).
nthreads
Number of parallel threads to use when fitting the model. |
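As a minimal, illustrative sketch (not one of the package's own examples) of how the tuning arguments above might be combined; here 'df' and the column name "ID" are placeholders for the user's own data:

model_tuned <- bagged.outliertrees(df,
  ntrees = 50,                          # smaller ensemble if training is too slow
  subsampling_rate = 0.1,               # smaller subsamples for a larger dataset
  min_gain = 1e-6, gain_as_pct = FALSE, # GritBot-style absolute gain threshold
  max_depth = 0L,                       # 0 = unconditional (1-d) outlier screening only
  cols_ignore = names(df) == "ID",      # logical vector: never split on the "ID" column
  nthreads = 1
)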
Value
An object with the fitted model that can be used to detect more outliers in new data.
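For instance (a hypothetical sketch, where new_df stands for a data frame of new observations with the same columns as the training data), the fitted model can be applied to fresh data with predict:

new_outliers <- predict(model, newdata = new_df, min_outlier_score = 0.5, nthreads = 1)
print(new_outliers, outliers_print = 10)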
References
GritBot software: https://www.rulequest.com/gritbot-info.html
Cortes, David. "Explainable outlier detection through decision tree conditioning." arXiv preprint arXiv:2001.00636 (2020).
See Also
predict.bagged.outliertrees, print.bagged.outlieroutputs, hypothyroid
Examples
library(bagged.outliertrees)
### example dataset with interesting outliers
data(hypothyroid)
### fit a Bagged OutlierTrees model
model <- bagged.outliertrees(hypothyroid,
  ntrees = 10,
  subsampling_rate = 0.5,
  z_outlier = 6,
  nthreads = 1
)
### use the fitted model to find outliers in the training dataset
outliers <- predict(model,
  newdata = hypothyroid,
  min_outlier_score = 0.5,
  nthreads = 1
)
### print the top-10 outliers in human-readable format
print(outliers, outliers_print = 10)
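### a further illustrative variation (not part of the original example): assuming
### min_outlier_score acts as a lower threshold on the reported outlier score,
### lowering it should surface more candidate outliers
more_outliers <- predict(model,
  newdata = hypothyroid,
  min_outlier_score = 0.3,
  nthreads = 1
)
print(more_outliers, outliers_print = 25)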