evaluateAlgorithmResults {RIbench}    R Documentation

Convenience function to generate all result plots and calculate the benchmark score

Description

Convenience function to generate all result plots and calculate the benchmark score.

Usage

evaluateAlgorithmResults(
  workingDir = "",
  algoNames = NULL,
  subset = "all",
  evalFolder = "Evaluation",
  withDirect = TRUE,
  withMean = TRUE,
  outline = TRUE,
  errorParam = c("zzDevAbs_Ov", "AbsPercError_Ov", "AbsError_Ov"),
  cutoffZ = 5,
  cols = NULL,
  ...
)

Arguments

workingDir

(character) specifying the working directory: plots will be stored in 'workingDir/evalFolder' and results will be read from 'workingDir/Results/algoName/biomarker'

algoNames

(character) vector specifying all algorithms that should be part of the evaluation

subset

(character, numeric, or data.frame) specifying the subset for which the algorithm(s) should be evaluated. Character options: 'all' (default) for all test sets; a distribution type ('normal', 'skewed', 'heavilySkewed', 'shifted'); a biomarker ('Hb', 'Ca', 'FT4', 'AST', 'LACT', 'GGT', 'TSH', 'IgE', 'CRP', 'LDH'); or 'Runtime' for the runtime analysis subset. Numeric option: the number of test sets per biomarker, e.g. 10. Data.frame option: a customized subset of the table with the test set specifications. See the sketch below the argument list for example calls.

evalFolder

(character) specifying the name of the output directory; plots will be stored in 'workingDir/evalFolder', default: 'Evaluation'

withDirect

(logical) indicating whether the direct method should be simulated for comparison (default: TRUE)

withMean

(logical) indicating whether the mean should be plotted as well (default: TRUE)

outline

(logical) indicating whether outliers should be drawn (TRUE, default), or not (FALSE)

errorParam

(character) specifying for which error parameter the data frame should be generated; choose between absolute z-score deviation ("zzDevAbs_Ov"), absolute percentage error ("AbsPercError_Ov"), and absolute error ("AbsError_Ov")

cutoffZ

(integer) specifying whether, and if so which, cutoff for the absolute z-score deviation should be used to classify results as implausible and exclude them from the overall benchmark score (default: 5)

cols

(character) vector specifying the colors used for the different algorithms

...

additional arguments to be passed to the method, e.g. "truncNormal" (logical) vector specifying whether a normal distribution truncated at zero shall be assumed; can be either TRUE/FALSE or a vector with one TRUE/FALSE per algorithm; "colDirect" (character) specifying the color used for the direct method, default: "grey"; "ylab" (character) specifying the label for the y-axis. An example of passing these arguments is shown in the Examples section.
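
As a minimal sketch of the different ways of specifying the subset argument (the algorithm name and values below are illustrative assumptions, not recommendations):

# evaluate 10 test sets per biomarker
benchmarkScore <- evaluateAlgorithmResults(workingDir = tempdir(),
		algoNames = "refineR", subset = 10)
# evaluate only the test sets simulated with a skewed distribution
benchmarkScore <- evaluateAlgorithmResults(workingDir = tempdir(),
		algoNames = "refineR", subset = "skewed")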

Value

(data frame) containing the computed benchmark results

Author(s)

Tatjana Ammer tatjana.ammer@roche.com

Examples


## Not run: 
# Ensure that 'generateBiomarkerTestSets()' and 'evaluateBiomarkerTestSets()' are called
# with the same workingDir and for all mentioned algorithms before calling this function.

# first example, evaluation for several algorithms 
benchmarkScore <- evaluateAlgorithmResults(workingDir=tempdir(), 
			algoNames=c("Hoffmann", "TML", "kosmic", "TMC", "refineR"))
# The function will create several plots saved in workingDir/Evaluation.

# second example, evaluation for only one algorithm and a defined subset
benchmarkScore <- evaluateAlgorithmResults(workingDir = tempdir(), 
			algoNames = "refineR", subset = 'Ca')

# third example, saving the results in a different folder, and setting a different cutoff
# for the absolute z-score deviation
benchmarkScore <- evaluateAlgorithmResults(workingDir = tempdir(), algoNames = "refineR", 
		subset = 'Ca', cutoffZ = 4, evalFolder = "Eval_Test")
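
# fourth example (illustrative sketch): passing additional arguments via '...';
# the values used here for truncNormal, colDirect, ylab, withMean, and outline are
# assumptions chosen for demonstration only
benchmarkScore <- evaluateAlgorithmResults(workingDir = tempdir(), algoNames = "refineR",
		subset = 'Ca', withMean = FALSE, outline = FALSE, truncNormal = TRUE,
		colDirect = "lightgrey", ylab = "Absolute z-score deviation")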

## End(Not run)
 


[Package RIbench version 1.0.2]