compare_performance {alookr} | R Documentation |
Compare model performance
Description
compare_performance() compares the performance of predictive models using several performance metrics.
Usage
compare_performance(model)
Arguments
model |
A model_df. The results of model prediction created by run_predict(). |
Value
list. The results of the model performance comparison. The list has the following components:
recommend_model : character. The name of the model that is recommended as the best among the various models.
top_count : numeric. The number of performance metrics on which each model achieves the best value.
mean_rank : numeric. The average rank of each model across the individual performance metrics.
top_metric : list. The names of the performance metrics on which each model performs best.
The performance metrics calculated are as follows:
ZeroOneLoss : Normalized Zero-One Loss (Classification Error Loss).
Accuracy : Accuracy.
Precision : Precision.
Recall : Recall.
Specificity : Specificity.
F1_Score : F1 Score.
LogLoss : Log loss / Cross-Entropy Loss.
AUC : Area Under the Receiver Operating Characteristic Curve (ROC AUC).
Gini : Gini Coefficient.
PRAUC : Area Under the Precision-Recall Curve (PR AUC).
LiftAUC : Area Under the Lift Chart.
GainAUC : Area Under the Gain Chart.
KS_Stat : Kolmogorov-Smirnov Statistic.
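To make the threshold-based metrics concrete, the following base-R sketch computes a few of them from binary actual/predicted vectors. This is an illustration of the definitions only, not alookr's internal implementation (which works on the probabilities in a model_df).

```r
# Illustration only: a few of the metrics above, computed in base R
# from binary actual/predicted class vectors.
actual    <- c(1, 0, 1, 1, 0, 0, 1, 0)
predicted <- c(1, 0, 0, 1, 0, 1, 1, 0)

tp <- sum(predicted == 1 & actual == 1)  # true positives
fp <- sum(predicted == 1 & actual == 0)  # false positives
fn <- sum(predicted == 0 & actual == 1)  # false negatives

zero_one  <- mean(predicted != actual)           # normalized zero-one loss: 0.25
accuracy  <- mean(predicted == actual)           # 0.75
precision <- tp / (tp + fp)                      # 0.75
recall    <- tp / (tp + fn)                      # 0.75
f1        <- 2 * precision * recall / (precision + recall)  # 0.75
```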
Examples
library(dplyr)
# Divide the train data set and the test data set.
sb <- rpart::kyphosis %>%
split_by(Kyphosis)
# Extract the train data set from original data set.
train <- sb %>%
extract_set(set = "train")
# Extract the test data set from original data set.
test <- sb %>%
extract_set(set = "test")
# Sampling for the unbalanced data set using SMOTE (synthetic minority over-sampling technique).
train <- sb %>%
sampling_target(seed = 1234L, method = "ubSMOTE")
# Cleaning the set.
train <- train %>%
cleanse
# Run the model fitting.
result <- run_models(.data = train, target = "Kyphosis", positive = "present")
# Predict the model.
pred <- run_predict(result, test)
# Compare the model performance
compare_performance(pred)
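The returned list can then be inspected component by component, using the names documented in the Value section above (a sketch continuing the example, with `pred` from run_predict()):

```r
# Inspect the comparison results by component.
comp <- compare_performance(pred)

comp$recommend_model  # name of the recommended (best) model
comp$top_count        # number of metrics each model wins on
comp$mean_rank        # average metric rank per model
comp$top_metric       # which metrics each model performs best on
```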