scores.compare.benchmarks {amber}R Documentation

Compares model scores against benchmark scores.

Description

Interpreting scores is challenging because reference data are subject to uncertainty. This function compares two types of scores. The first set of scores expresses model performance and is based on comparing model output against reference data. The second set of scores is based on a comparison of two independent reference data sets (e.g. remotely sensed GPP against FLUXNET measurements). The difference between both scores reflects the uncertainty of the reference data and indicates how well a model could perform given this uncertainty. Scores that are based on reference data only are referred to here as benchmark scores.
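The interpretation of the score difference can be sketched as follows (a minimal illustration with invented values; the actual scores are computed by amber from reference and model data):

```r
# Hypothetical skill scores for illustration only.
bench.score <- c(GPP = 0.75)   # reference vs. reference (benchmark score)
model.score <- c(GPP = 0.68)   # model vs. reference

# A small negative difference suggests the model performs almost as well
# as one reference data set evaluated against the other, i.e. the model
# is close to the limit set by reference-data uncertainty.
score.diff <- model.score - bench.score
print(score.diff)
```

A strongly negative difference would indicate that the model falls short of what the reference-data uncertainty alone would permit.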

Usage

scores.compare.benchmarks(bench.path, model.path, model.id,
  plot.width = 8, plot.height = 4.8, outputDir = FALSE,
  defineVariableOrder = FALSE, myVariables = myVariables)

Arguments

bench.path

A string that gives the path where benchmarks (i.e. reference vs. reference data) are stored

model.path

A string that gives the path where the output from scores.tables is stored (model)

model.id

A string that gives the name of a model, e.g. 'CLASSIC'

plot.width

Number that gives the plot width, e.g. 7.3

plot.height

Number that gives the plot height, e.g. 6.5

outputDir

A string that gives the output directory, e.g. '/home/project/study'. The output will only be written if the user specifies an output directory.

defineVariableOrder

Logical. If TRUE, variables are sorted according to the parameter myVariables defined below. Default setting is FALSE.

myVariables

An R object that defines the variables and their order in the score table, e.g. c('GPP', 'RECO', 'NEE').

Value

A figure in PDF format that shows (a) the benchmark skill score, (b) the model skill score, and (c) the difference between both scores.

Examples

library(amber)
library(classInt)
library(doParallel)
library(foreach)
library(Hmisc)
library(latex2exp)
library(ncdf4)
library(parallel)
library(raster)
library(rgdal)
library(rgeos)
library(scico)
library(sp)
library(stats)
library(utils)
library(viridis)
library(xtable)

bench.path <- system.file('extdata/scoresBenchmarks', package = 'amber')
model.path <- system.file('extdata/scores', package = 'amber')

model.id <- 'CLASSIC'
myVariables <- c('ALBS', 'GPP', 'LAI')

scores.compare.benchmarks(bench.path, model.path, model.id, plot.width = 8, plot.height = 4,
  defineVariableOrder = TRUE, myVariables = myVariables)


[Package amber version 1.0.3 Index]