model_performance {survex}    R Documentation
Dataset Level Performance Measures
Description
This function calculates performance metrics for survival models: the C/D AUC, the Brier score, their integrated versions, and the concordance index. It can also calculate ROC curves for selected time points.
Usage
model_performance(explainer, ...)
## S3 method for class 'surv_explainer'
model_performance(
  explainer,
  ...,
  type = "metrics",
  metrics = c(`C-index` = c_index, `Integrated C/D AUC` = integrated_cd_auc,
    `Brier score` = brier_score, `Integrated Brier score` = integrated_brier_score,
    `C/D AUC` = cd_auc),
  times = NULL
)
Arguments
explainer
 an explainer object - model preprocessed by the explain() function
...
 other parameters, currently ignored
type
 character, either "metrics" or "roc"; determines whether performance metrics or ROC curves for the selected time points are calculated
metrics
 a named vector containing the metrics to be calculated. The values should be standardized loss functions. The functions can be supplied manually but have to have these named parameters (y_true, risk, surv, times), where y_true is the survival::Surv object with the observed times and statuses, risk is the vector of risk scores predicted by the model, and surv is the matrix of predicted survival functions evaluated at times. A minimal sketch of such a function is shown after this list.
times
 a numeric vector of times. If type == "roc", the ROC curves are calculated for these time points.
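A function passed through metrics is assumed to follow the calling convention of the built-in loss functions, i.e. to accept the named parameters y_true, risk, surv and times, and to return either a single value or one value per time point (as cd_auc and brier_score do). The following is a minimal sketch under that assumption; naive_brier is a hypothetical helper that ignores censoring and only illustrates the expected signature, it is not a function exported by survex.

# hypothetical custom metric: a censoring-unaware Brier score, one value per time point
naive_brier <- function(y_true = NULL, risk = NULL, surv = NULL, times = NULL) {
    observed_times <- y_true[, 1]
    # indicator matrix: is observation i still event-free at evaluation time t?
    still_alive <- outer(observed_times, times, FUN = ">")
    # mean squared difference between the indicator and the predicted survival probability
    colMeans((still_alive - surv)^2)
}

# hypothetical usage with the cph_exp explainer from the Examples below
model_performance(
    cph_exp,
    metrics = c("C-index" = c_index, "Naive Brier score" = naive_brier)
)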
Value
An object of class "model_performance_survival". It's a list of metric values calculated for the model. It contains:
Harrell's concordance index [1]
Brier score [2, 3]
C/D AUC using the estimator proposed by Uno et al. [4]
integral of the Brier score
integral of the C/D AUC
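For reference, the time-dependent Brier score of Graf et al. [3] weights the squared prediction errors by the inverse probability of censoring, and the integrated measures average their time-dependent counterparts over the evaluation grid. A sketch of the standard estimator follows (given as general background; the exact weighting used internally may differ in detail):

BS(t) = \frac{1}{n} \sum_{i=1}^{n} \left[ \frac{\hat{S}(t \mid x_i)^2 \, \mathbb{1}(t_i \le t, \delta_i = 1)}{\hat{G}(t_i)} + \frac{(1 - \hat{S}(t \mid x_i))^2 \, \mathbb{1}(t_i > t)}{\hat{G}(t)} \right]

IBS = \frac{1}{t_{\max} - t_{\min}} \int_{t_{\min}}^{t_{\max}} BS(t) \, dt

where \hat{S}(t \mid x_i) is the predicted survival function, \hat{G} is the Kaplan-Meier estimate of the censoring distribution, and the integrated C/D AUC is obtained analogously by integrating AUC(t) over time.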
References
[1] Harrell, F.E., Jr., et al. "Regression modelling strategies for improved prognostic prediction." Statistics in Medicine 3.2 (1984): 143-152.
[2] Brier, Glenn W. "Verification of forecasts expressed in terms of probability." Monthly Weather Review 78.1 (1950): 1-3.
[3] Graf, Erika, et al. "Assessment and comparison of prognostic classification schemes for survival data." Statistics in Medicine 18.17-18 (1999): 2529-2545.
[4] Uno, Hajime, et al. "Evaluating prediction rules for t-year survivors with censored regression models." Journal of the American Statistical Association 102.478 (2007): 527-537.
Examples
library(survival)
library(survex)
# fit a Cox proportional hazards model and two random survival forests
cph <- coxph(Surv(time, status) ~ ., data = veteran, model = TRUE, x = TRUE, y = TRUE)
rsf_ranger <- ranger::ranger(Surv(time, status) ~ .,
    data = veteran,
    respect.unordered.factors = TRUE,
    num.trees = 100,
    mtry = 3,
    max.depth = 5
)
rsf_src <- randomForestSRC::rfsrc(Surv(time, status) ~ .,
    data = veteran
)
# create explainers for the models
cph_exp <- explain(cph)
rsf_ranger_exp <- explain(rsf_ranger,
    data = veteran[, -c(3, 4)],
    y = Surv(veteran$time, veteran$status)
)
rsf_src_exp <- explain(rsf_src)
# calculate the dataset level performance measures
cph_model_performance <- model_performance(cph_exp)
rsf_ranger_model_performance <- model_performance(rsf_ranger_exp)
rsf_src_model_performance <- model_performance(rsf_src_exp)
print(cph_model_performance)
# compare the models: scalar metrics first, then time-dependent curves
plot(rsf_ranger_model_performance, cph_model_performance,
    rsf_src_model_performance,
    metrics_type = "scalar"
)
plot(rsf_ranger_model_performance, cph_model_performance, rsf_src_model_performance)
# calculate ROC curves for selected time points
cph_model_performance_roc <- model_performance(cph_exp, type = "roc", times = c(100, 250, 500))
plot(cph_model_performance_roc)
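The metrics argument can also be used to restrict the output to a subset of the built-in measures. A minimal sketch, assuming only the scalar metrics c_index and integrated_brier_score from the Usage defaults above:

# keep only the two scalar measures
cph_scalar_performance <- model_performance(
    cph_exp,
    metrics = c("C-index" = c_index, "Integrated Brier score" = integrated_brier_score)
)
print(cph_scalar_performance)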