print.fairness_object {fairmodels}    R Documentation
Print Fairness Object
Description
Print Fairness Object
Usage
## S3 method for class 'fairness_object'
print(
  x,
  ...,
  colorize = TRUE,
  fairness_metrics = c("ACC", "TPR", "PPV", "FPR", "STP"),
  fair_level = NULL,
  border_width = 1,
  loss_aggregating_function = NULL
)
Arguments
x
a fairness_object

...
other parameters

colorize
logical, whether information about metrics should be printed in color or not

fairness_metrics
character, vector of metrics. Subset of fairness metrics to be used. The full set is defined as c("ACC", "TPR", "PPV", "STP").

fair_level
numerical, number of fairness metrics that need to be passed in order to call a model fair. Default is 5.

border_width
numerical, width of the border between fair and unfair models.

loss_aggregating_function
function, loss aggregating function that may be provided. It takes metric scores as a vector and aggregates them into one value. The default is 'Total loss', which measures the total sum of distances to 1. It may be interpreted as the sum of bar heights in fairness_check.
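The default aggregation can be illustrated with a short sketch. This is an assumption about the idea behind 'Total loss' (summing each metric ratio's distance to 1), not the package's exact internals; the `total_loss` helper and the `scores` vector below are hypothetical.

```r
# Sketch (assumption): "Total loss" style aggregation sums the
# distances of metric ratio scores from 1 (1 means perfect parity).
total_loss <- function(x) sum(abs(1 - x), na.rm = TRUE)

# Hypothetical metric ratios for one model
scores <- c(ACC = 0.95, TPR = 0.80, PPV = 1.10, FPR = 1.25, STP = 0.90)

total_loss(scores) # 0.05 + 0.20 + 0.10 + 0.25 + 0.10 = 0.7
```

A lower aggregated value means the model's metric ratios sit closer to parity overall.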
Examples
data("german")

y_numeric <- as.numeric(german$Risk) - 1

lm_model <- glm(Risk ~ .,
  data = german,
  family = binomial(link = "logit")
)

rf_model <- ranger::ranger(Risk ~ .,
  data = german,
  probability = TRUE,
  max.depth = 3,
  num.trees = 100,
  seed = 1,
  num.threads = 1
)

explainer_lm <- DALEX::explain(lm_model, data = german[, -1], y = y_numeric)

explainer_rf <- DALEX::explain(rf_model,
  data = german[, -1],
  y = y_numeric
)

fobject <- fairness_check(explainer_lm, explainer_rf,
  protected = german$Sex,
  privileged = "male"
)

print(fobject)
# custom print
print(fobject,
  fairness_metrics = c("ACC", "TPR"), # subset of metrics to be printed
  border_width = 0, # in our case 2/2 will be printed in green and 1/2 in red
  loss_aggregating_function = function(x) sum(abs(x)) + 10 # custom loss function - takes a vector of metric scores
)