CalibrationCurves {CalibrationCurves}    R Documentation

General information on the package and its functions

Description

Using this package, you can assess the calibration performance of your prediction model, that is, to what extent the predictions correspond with what we observe empirically. To assess the calibration of a model with a binary outcome, you can use the val.prob.ci.2 or the valProbggplot function. If the outcome of your prediction model is not binary but follows another distribution from the exponential family, you can employ the genCalCurve function.
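As an illustration of the binary-outcome workflow, the sketch below fits a logistic regression on simulated development data and passes the predicted probabilities for a simulated validation set to val.prob.ci.2 (valProbggplot can be called in the same way). The simulated data and model are purely hypothetical.

library(CalibrationCurves)

set.seed(1)

# Simulate a development and a validation set with a binary outcome
n    <- 1000
x    <- rnorm(n)
y    <- rbinom(n, 1, plogis(-1 + 2 * x))
xVal <- rnorm(n)
yVal <- rbinom(n, 1, plogis(-1 + 2 * xVal))

# Fit the prediction model on the development data
fit  <- glm(y ~ x, family = binomial)

# Predicted probabilities on the validation data
pHat <- predict(fit, newdata = data.frame(x = xVal), type = "response")

# Calibration plot with performance measures for a binary outcome
val.prob.ci.2(pHat, yVal)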

If you are not familiar with the theory and/or application of calibration, you can consult the vignette of the package. This vignette provides a comprehensive overview of the theory and contains a tutorial with some practical examples. Further, we suggest that the reader consult the paper on generalized calibration curves on arXiv. In this paper, we provide the theoretical background on the generalized calibration framework and illustrate its applicability with some prototypical examples of both statistical and machine learning prediction models that are well calibrated, overfitted, and underfitted.

Originally, the package only contained functions to assess the calibration of prediction models with a binary outcome. The details section provides some background information on the history of the package's development.

Details

Some years ago, Yvonne Vergouwe and Ewout Steyerberg adapted the function val.prob from the rms package (https://cran.r-project.org/package=rms) into val.prob.ci, extending val.prob with additional functionality.

In December 2015, Bavo De Cock, Daan Nieboer, and Ben Van Calster adapted this function into val.prob.ci.2.

In 2023, Bavo De Cock (Campo) published a paper that introduces the generalized calibration framework. This framework is an extension of the logistic calibration framework to prediction models where the outcome's distribution is a member of the exponential family. As such, we are able to assess the calibration of a wider range of prediction models. The methods in this paper are implemented in the CalibrationCurves package.
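As a minimal sketch of how a generalized calibration curve could be obtained with genCalCurve, the example below uses a simulated Poisson outcome. The simulated data are hypothetical and the argument names, order, and family specification shown here are assumptions; please consult ?genCalCurve for the exact interface.

library(CalibrationCurves)

set.seed(2)

# Simulate a validation set with a Poisson-distributed outcome
n    <- 1000
x    <- rnorm(n)
y    <- rpois(n, lambda = exp(0.5 + 0.5 * x))

# Predictions on the outcome scale from a Poisson regression model
fit  <- glm(y ~ x, family = poisson)
yHat <- predict(fit, type = "response")

# Generalized calibration curve; argument names/order are assumed here,
# with the family presumably specified as in glm()
genCalCurve(y, yHat, family = poisson)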

The most current version of this package can always be found at https://github.com/BavoDC and can easily be installed using the following code:
install.packages("devtools") # if not yet installed
require(devtools)
install_github("BavoDC/CalibrationCurves", dependencies = TRUE, build_vignettes = TRUE)

References

De Cock Campo, B. (2023). Towards reliable predictive analytics: a generalized calibration framework. arXiv:2309.08559, available at https://arxiv.org/abs/2309.08559.

Steyerberg, E.W., Van Calster, B., Pencina, M.J. (2011). Performance measures for prediction models and markers: evaluation of predictions and classifications. Revista Espanola de Cardiologia, 64(9), pp. 788-794.

Van Calster, B., Nieboer, D., Vergouwe, Y., De Cock, B., Pencina, M., Steyerberg, E.W. (2016). A calibration hierarchy for risk models was defined: from utopia to empirical data. Journal of Clinical Epidemiology, 74, pp. 167-176.

Van Hoorde, K., Van Huffel, S., Timmerman, D., Bourne, T., Van Calster, B. (2015). A spline-based tool to assess and visualize the calibration of multiclass risk predictions. Journal of Biomedical Informatics, 54, pp. 283-293.
