.BT_cv_errors {BT}    R Documentation

Cross-validation errors.

Description

Function to compute the cross-validation error.

Usage

.BT_cv_errors(BT_cv_fit, cv.folds, folds)

Arguments

BT_cv_fit

a BTCVFit object.

cv.folds

a numeric value specifying the number of cross-validation folds.

folds

a numeric vector containing the folds.id values, i.e. the fold to which each observation belongs. If these were not defined by the user, they are generated at random based on the cv.folds input, as sketched below.
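
For illustration only, such a random fold assignment could look roughly like the following sketch; the seed and the number of observations n are hypothetical example values, not part of the BT API:

    set.seed(1)                     # hypothetical seed, for reproducibility
    n <- 1000                       # hypothetical number of training observations
    cv.folds <- 5                   # number of folds
    folds <- sample(rep(seq_len(cv.folds), length.out = n))
    table(folds)                    # roughly balanced fold sizes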

Details

This function computes the global cross-validation error as a function of the boosting iteration. In other words, for each boosting iteration the out-of-fold errors are averaged across the folds.
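
As a minimal sketch of this averaging (not the actual internal implementation), assume a hypothetical matrix oof.errors with one row per fold and one column per boosting iteration:

    ## oof.errors[k, t]: out-of-fold error of fold k at boosting iteration t
    ## (random values used purely for illustration).
    cv.folds <- 5
    n.iter <- 100
    oof.errors <- matrix(runif(cv.folds * n.iter), nrow = cv.folds)
    cv.error <- colMeans(oof.errors)   # global CV error per iteration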

Value

A vector containing the cross-validation error at each boosting iteration.
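
Since .BT_cv_errors is an internal helper, it is normally not called directly. Assuming a vector cv.error of that form were available, it could for instance be used as follows (hypothetical values):

    cv.error <- c(2.10, 1.85, 1.72, 1.69, 1.71, 1.75)   # hypothetical CV errors
    best.iter <- which.min(cv.error)                     # iteration with smallest CV error
    plot(cv.error, type = "l",
         xlab = "Boosting iteration", ylab = "Cross-validation error")
    abline(v = best.iter, lty = 2)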

Author(s)

Gireg Willame gireg.willame@gmail.com

This package is inspired by the gbm3 package. For more details, see https://github.com/gbm-developers/gbm3/.

References

M. Denuit, D. Hainaut and J. Trufin (2019). Effective Statistical Learning Methods for Actuaries I: GLMs and Extensions, Springer Actuarial.

M. Denuit, D. Hainaut and J. Trufin (2019). Effective Statistical Learning Methods for Actuaries II: Tree-Based Methods and Extensions, Springer Actuarial.

M. Denuit, D. Hainaut and J. Trufin (2019). Effective Statistical Learning Methods for Actuaries III: Neural Networks and Extensions, Springer Actuarial.

M. Denuit, D. Hainaut and J. Trufin (2022). Response versus gradient boosting trees, GLMs and neural networks under Tweedie loss and log-link. Accepted for publication in Scandinavian Actuarial Journal.

M. Denuit, J. Huyghe and J. Trufin (2022). Boosting cost-complexity pruned trees on Tweedie responses: The ABT machine for insurance ratemaking. Paper submitted for publication.

M. Denuit, J. Trufin and T. Verdebout (2022). Boosting on the responses with Tweedie loss functions. Paper submitted for publication.

See Also

BT.


[Package BT version 0.4 Index]