CalibrationCurves {CalibrationCurves} | R Documentation
Some years ago, Yvonne Vergouwe and Ewout Steyerberg adapted the function val.prob from the rms package (https://cran.r-project.org/package=rms) into val.prob.ci and added the following features:
- Scaled Brier score, obtained by relating the Brier score to its maximum for an average calibrated null model
- Risk distribution according to outcome
- 0 and 1 to indicate the outcome labels; set with d1lab="..", d0lab=".."
- Labels: y-axis: "Observed Frequency"; triangles: "Grouped observations"
- Confidence intervals around the triangles
- A cut-off can be plotted; set its x coordinate (a short usage sketch follows this list)
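These options carry over to the current function val.prob.ci.2. Below is a minimal sketch with simulated data; the argument names d0lab, d1lab and cutoff are taken from the descriptions above, so check ?val.prob.ci.2 for the exact interface of your installed version:

library(CalibrationCurves)
set.seed(1)
p <- runif(200)           # hypothetical predicted probabilities
y <- rbinom(200, 1, p)    # simulated binary outcome, calibrated by construction
val.prob.ci.2(p, y,
              d0lab = "no event",   # label for outcome 0
              d1lab = "event",      # label for outcome 1
              cutoff = 0.5)         # x coordinate of the plotted cut-off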
In December 2015, Bavo De Cock, Daan Nieboer, and Ben Van Calster adapted this into val.prob.ci.2:
- Flexible calibration curves can be obtained using loess (the default) or restricted cubic splines, with pointwise 95% confidence intervals. Flexible curves are now shown by default, a choice based on the findings reported in Van Calster et al. (2016).
- Loess: confidence intervals can be obtained in closed form or via bootstrapping (CL.BT=TRUE performs bootstrapping with 2000 bootstrap samples, though this takes a while)
- RCS: 3 to 5 knots can be used; the knot locations are estimated using the default quantiles of x (by rcspline.eval, see rcspline.plot and rcspline.eval). If estimation problems occur at the specified number of knots (nr.knots, default 5), the analysis is repeated with nr.knots-1 until the problem disappears; the function stops if an estimation problem remains with 3 knots.
- The plot can now be adjusted using standard plot arguments (cex.axis and so on); the size of the legend has to be specified via cex.leg
- y-axis label: "Observed proportion"
- Stats: the Estimated Calibration Index (ECI) has been added, a statistical measure to quantify lack of calibration (Van Hoorde et al., 2015)
- Stats shown in the plot: by default the "abc" of model performance (Steyerberg et al., 2011) is shown, i.e. the calibration intercept (calibration-in-the-large), calibration slope and c-statistic. Alternatively, the user can select the statistics of choice (e.g. dostats=c("C (ROC)","R2") or dostats=c(2,3)); see the usage sketch after this list
- Vectors p, y and logit no longer have to be sorted
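A minimal usage sketch of these options with simulated data (the data and values below are hypothetical and chosen for illustration; the arguments smooth, nr.knots, dostats and cex.leg are used as described above, so check ?val.prob.ci.2 for the exact interface of your installed version):

library(CalibrationCurves)

# Hypothetical simulated data and model, for illustration only
set.seed(1)
x <- rnorm(500)
y <- rbinom(500, 1, plogis(-0.5 + 1.2 * x))                     # binary outcome
p <- predict(glm(y ~ x, family = binomial), type = "response")  # predicted probabilities

# Default: flexible calibration curve via loess with pointwise 95% CI and the "abc"
# of model performance; CL.BT=TRUE would compute the loess CI from 2000 bootstrap
# samples instead of the closed form (slow)
val.prob.ci.2(p, y)

# Restricted cubic splines with 3 knots, user-selected statistics and legend size
val.prob.ci.2(p, y, smooth = "rcs", nr.knots = 3,
              dostats = c("C (ROC)", "R2"), cex.leg = 0.8)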
Since then, several new features have been added and are still being added. The most recent version of this package can always be found at https://github.com/BavoDC and can easily be installed using the following code:
install.packages("devtools") # if not yet installed
require(devtools)
install_git("https://github.com/BavoDC/CalibrationCurves")
References:

Steyerberg, E.W., Van Calster, B., Pencina, M.J. (2011). Performance measures for prediction models and markers: evaluation of predictions and classifications. Revista Espanola de Cardiologia, 64(9), pp. 788-794.

Van Calster, B., Nieboer, D., Vergouwe, Y., De Cock, B., Pencina, M., Steyerberg, E.W. (2016). A calibration hierarchy for risk models was defined: from utopia to empirical data. Journal of Clinical Epidemiology, 74, pp. 167-176.

Van Hoorde, K., Van Huffel, S., Timmerman, D., Bourne, T., Van Calster, B. (2015). A spline-based tool to assess and visualize the calibration of multiclass risk predictions. Journal of Biomedical Informatics, 54, pp. 283-293.