cv {utiml}		R Documentation
Multi-label cross-validation
Description
Perform the cross-validation procedure for multi-label learning.
Usage
cv(
  mdata,
  method,
  ...,
  cv.folds = 10,
  cv.sampling = c("random", "iterative", "stratified"),
  cv.results = FALSE,
  cv.predictions = FALSE,
  cv.measures = "all",
  cv.cores = getOption("utiml.cores", 1),
  cv.seed = getOption("utiml.seed", NA)
)
Arguments
mdata
A mldr dataset.

method
The multi-label classification method. It also accepts the name of the method as a string.

...
Additional parameters required by the method.

cv.folds
Number of folds. (Default: 10)

cv.sampling
The method used to split the data into folds:

"random": a random division of the examples;
"iterative": fold stratification based on the label proportions;
"stratified": fold stratification based on the labelset proportions.

(Default: "random")

cv.results
Logical value indicating if the fold results should be reported. (Default: FALSE)

cv.predictions
Logical value indicating if the predictions should be reported. (Default: FALSE)

cv.measures
The names of the measures to be computed. Call multilabel_measures() to see the available measure names. (Default: "all")

cv.cores
The number of cores used to parallelize the cross-validation procedure. (Default: getOption("utiml.cores", 1))

cv.seed
An optional integer used to set the seed. (Default: getOption("utiml.seed", NA))
Value
If cv.results and cv.predictions are FALSE, the return value is a vector with the expected multi-label measures; otherwise, it is a list containing the multi-label measures and the other requested results (the label measures and/or the prediction object) for each fold.
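A rough sketch of how the two return shapes might be inspected (the component names of the list are not specified here, so only generic inspection is shown):

res <- cv(toyml, "br", base.algorithm="RANDOM", cv.folds=3)
is.numeric(res)        # a named vector of multi-label measures

res <- cv(toyml, "br", base.algorithm="RANDOM", cv.folds=3,
          cv.results=TRUE, cv.predictions=TRUE)
names(res)             # inspect which per-fold components were returned
str(res, max.level=1)  # overview of the list structure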
See Also
Other evaluation: multilabel_confusion_matrix(), multilabel_evaluate(), multilabel_measures()
Examples
# Run 10 folds for the BR method
res1 <- cv(toyml, br, base.algorithm="RANDOM", cv.folds=10)
# Run 3 folds for the RAkEL method and get the fold results and the predictions
res2 <- cv(mdata=toyml, method="rakel", base.algorithm="RANDOM", k=2, m=10,
cv.folds=3, cv.results=TRUE, cv.predictions=TRUE)
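
# A further sketch: reproducible 5-fold CV restricted to a few measures and
# run on two cores ("hamming-loss" and "F1" are assumed to be valid utiml
# measure names; see multilabel_measures())
res3 <- cv(toyml, br, base.algorithm="RANDOM", cv.folds=5,
           cv.measures=c("hamming-loss", "F1"), cv.cores=2, cv.seed=42)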