calc_riskRatio_gev {climextRemes}    R Documentation
Compute risk ratio and uncertainty based on generalized extreme value model fit to block maxima or minima
Description
Compute risk ratio and uncertainty by fitting a generalized extreme value model, designed specifically for climate data, to block maxima or minima. The risk ratio is the ratio of the probability of exceedance of a pre-specified value under the model fit to the first dataset to the probability under the model fit to the second dataset. Default standard errors are based on the usual MLE asymptotics using a delta-method-based approximation, but standard errors based on the nonparametric bootstrap and on a likelihood ratio procedure can also be computed.
Usage
calc_riskRatio_gev(
returnValue,
y1,
y2,
x1 = NULL,
x2 = NULL,
locationFun1 = NULL,
locationFun2 = NULL,
scaleFun1 = NULL,
scaleFun2 = NULL,
shapeFun1 = NULL,
shapeFun2 = NULL,
nReplicates1 = 1,
nReplicates2 = 1,
replicateIndex1 = NULL,
replicateIndex2 = NULL,
weights1 = NULL,
weights2 = NULL,
xNew1 = NULL,
xNew2 = NULL,
maxes = TRUE,
scaling1 = 1,
scaling2 = 1,
ciLevel = 0.9,
ciType,
bootSE,
bootControl = NULL,
lrtControl = NULL,
optimArgs = NULL,
optimControl = NULL,
initial1 = NULL,
initial2 = NULL,
logScale1 = NULL,
logScale2 = NULL,
getReturnCalcs = FALSE,
getParams = FALSE,
getFit = FALSE
)
Arguments
returnValue
numeric value giving the value for which the risk ratio should be calculated, where the resulting period will be the average number of blocks until the value is exceeded and the probability will be the probability of exceeding the value in any single block.
y1
a numeric vector of observed maxima or minima values for the first dataset. See fit_gev for how the values should be ordered when there are multiple replicates.
y2
a numeric vector of observed maxima or minima values for the second dataset. Analogous to y1.
x1
a data frame, or object that can be converted to a data frame, with columns corresponding to covariate/predictor/feature variables and each row containing the values of the variables for the corresponding observed maximum/minimum. The number of rows should either equal the length of y1 or, when there are multiple replicates that share the same covariate values, the number of observations in a single replicate.
x2
analogous to x1, but for the second dataset.
locationFun1
formula, vector of character strings, or indices describing a linear model (i.e., regression function) for the location parameter using columns from x1 (equivalent specifications are sketched after this argument list).
locationFun2
formula, vector of character strings, or indices describing a linear model (i.e., regression function) for the location parameter using columns from x2.
scaleFun1
formula, vector of character strings, or indices describing a linear model (i.e., regression function) for the (potentially transformed) scale parameter using columns from x1.
scaleFun2
formula, vector of character strings, or indices describing a linear model (i.e., regression function) for the (potentially transformed) scale parameter using columns from x2.
shapeFun1
formula, vector of character strings, or indices describing a linear model (i.e., regression function) for the shape parameter using columns from x1.
shapeFun2
formula, vector of character strings, or indices describing a linear model (i.e., regression function) for the shape parameter using columns from x2.
nReplicates1
numeric value indicating the number of replicates for the first dataset.
nReplicates2
numeric value indicating the number of replicates for the second dataset.
replicateIndex1
numeric vector providing the index of the replicate corresponding to each element of y1.
replicateIndex2
numeric vector providing the index of the replicate corresponding to each element of y2.
weights1
a vector providing the weights for each observation in the first dataset. When there is only one replicate or the weights do not vary by replicate, a vector of length equal to the number of observations. When weights vary by replicate, this should be of equal length to y1.
weights2
a vector providing the weights for each observation in the second dataset. Analogous to weights1.
xNew1
object of the same form as x1, providing covariate/predictor/feature values for which the risk ratio is desired.
xNew2
object of the same form as x2, providing covariate/predictor/feature values for which the risk ratio is desired. Analogous to xNew1.
maxes
logical indicating whether analysis is for block maxima (TRUE) or block minima (FALSE); in the latter case, the function works with the negative of the values, changing the sign of the resulting location parameters.
scaling1
positive-valued scalar used to scale the data values of the first dataset for more robust optimization performance. When multiplied by the values, it should produce values with magnitude around 1.
scaling2
positive-valued scalar used to scale the data values of the second dataset for more robust optimization performance. When multiplied by the values, it should produce values with magnitude around 1.
ciLevel
statistical confidence level for confidence intervals; in repeated experimentation, this proportion of confidence intervals should contain the true risk ratio. Note that if only one endpoint of the resulting interval is used, for example the lower bound, then the effective confidence level increases by half of one minus ciLevel (e.g., a two-sided 90% interval corresponds to a one-sided 95% interval).
ciType
character vector indicating which type of confidence intervals to compute. See Details.
bootSE
logical indicating whether to use the bootstrap to estimate the standard error of the risk ratio.
bootControl
a list of control parameters for the bootstrapping. See Details.
lrtControl
list containing a single component, bounds, giving the range within which the endpoints of the likelihood ratio-based confidence interval are sought.
optimArgs
a list with named components matching exactly any arguments that the user wishes to pass to R's optim function; see help(optim).
optimControl
a list with named components matching exactly any elements that the user wishes to pass as the control argument to R's optim function; see help(optim).
initial1
a list with components named 'location', 'scale', and 'shape' providing initial parameter values for the first dataset, intended to speed up or enable optimization when the default initial values lead to failure of the optimization.
initial2
a list with components named 'location', 'scale', and 'shape' providing initial parameter values for the second dataset. Analogous to initial1.
logScale1
logical indicating whether optimization for the scale parameter should be done on the log scale for the first dataset. By default this is FALSE when the scale is not a function of covariates and TRUE when the scale is a function of covariates (to ensure the scale is positive regardless of the regression coefficients).
logScale2
logical indicating whether optimization for the scale parameter should be done on the log scale for the second dataset. By default this is FALSE when the scale is not a function of covariates and TRUE when the scale is a function of covariates (to ensure the scale is positive regardless of the regression coefficients).
getReturnCalcs
logical indicating whether to return the estimated return values/probabilities/periods from the fitted models.
getParams
logical indicating whether to return the fitted parameter values and their standard errors for the fitted models; WARNING: parameter values for models with covariates for the scale parameter must be interpreted based on the value of logScale1 or logScale2.
getFit
logical indicating whether to return the full fitted models (potentially useful for model evaluation and for understanding optimization problems); note that estimated parameters in the fit object for nonstationary models will not generally match the MLE provided when getParams is TRUE, because the covariates are normalized before fitting.
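As an illustrative sketch of the equivalent ways a regression function such as locationFun1 can be specified (assuming x1 has a column named 'years', as in the Examples; the index form is taken to refer to column position in x1):
## locationFun1 = ~years      # formula
## locationFun1 = c('years')  # character string naming a column of x1
## locationFun1 = 1           # index of that column in x1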
Details
See fit_gev for more details on fitting the block maxima model for each dataset, including details on blocking and replication. Also see fit_gev for information on the bootControl argument.
Optimization failures:
It is not uncommon for maximization of the log-likelihood to fail for extreme value models. Please see the help information for fit_gev. Also note that if the probability in the denominator of the risk ratio is near one, one may achieve better numerical performance by swapping the two datasets and computing the risk ratio for the probability under dataset 2 relative to the probability under dataset 1.
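Since log(p1/p2) = -log(p2/p1), the estimate from the swapped fit can be mapped back by a sign change. A minimal sketch (with hypothetical vectors yA and yB of block maxima for datasets 1 and 2):
## fitSwapped <- calc_riskRatio_gev(returnValue = 3, y1 = yB, y2 = yA)
## 1 / fitSwapped$riskRatio     # risk ratio of dataset 1 relative to dataset 2
## -fitSwapped$logRiskRatio     # the log risk ratio simply changes sign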
ciType can include one or more of the following: 'delta', 'lrt', 'boot_norm', 'boot_perc', 'boot_basic', 'boot_stud', 'boot_bca'. 'delta' uses the delta method to compute an asymptotic interval based on the standard error of the log risk ratio. 'lrt' inverts a likelihood-ratio test. Bootstrap-based options are the normal-based interval using the bootstrap standard error ('boot_norm'), the percentile bootstrap ('boot_perc'), the basic bootstrap ('boot_basic'), the bootstrap-t ('boot_stud'), and the bootstrap BCA method ('boot_bca'). See Paciorek et al. (2018) for more details.
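Several interval types can be requested in one call; an illustrative sketch (here y1 and y2 stand for the two datasets, with the remaining arguments as in the Examples):
## out <- calc_riskRatio_gev(returnValue = 3, y1 = y1, y2 = y2,
##                           ciType = c('delta', 'lrt', 'boot_perc'),
##                           bootSE = TRUE)  # also return a bootstrap standard error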
Value
The primary outputs of this function are as follows: the log of the risk ratio and the standard error of that log risk ratio (logRiskRatio and se_logRiskRatio) as well as the risk ratio itself (riskRatio). The standard error is based on the usual MLE asymptotics using a delta-method-based approximation. If requested via ciType, confidence intervals will be returned, as discussed in Details.
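For example, with out the object returned by the first call in the Examples, these quantities can be inspected as follows (a sketch, assuming as is typical that the output is a list with the named components above; the last line just rebuilds the standard 90% delta-method interval on the log scale for comparison):
## out$logRiskRatio      # log of the estimated risk ratio
## out$se_logRiskRatio   # delta-method standard error of the log risk ratio
## out$riskRatio         # risk ratio on the original scale
## exp(out$logRiskRatio + c(-1, 1) * qnorm(0.95) * out$se_logRiskRatio)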
Author(s)
Christopher J. Paciorek
References
Paciorek, C.J., D.A. Stone, and M.F. Wehner. 2018. Quantifying uncertainty in the attribution of human influence on severe weather. Weather and Climate Extremes 20:69-80. arXiv preprint <https://arxiv.org/abs/1706.03388>.
Jeon S., C.J. Paciorek, and M.F. Wehner. 2016. Quantile-based bias correction and uncertainty quantification of extreme event attribution statements. Weather and Climate Extremes 12: 24-32. <DOI:10.1016/j.wace.2016.02.001>. arXiv preprint: <http://arxiv.org/abs/1602.04139>.
Examples
data(Fort, package = 'extRemes')
FortMax <- aggregate(Prec ~ year, data = Fort, max)
earlyYears <- 1900:1929
lateYears <- 1970:1999
earlyPeriod <- which(FortMax$year %in% earlyYears)
latePeriod <- which(FortMax$year %in% lateYears)
# contrast late period with early period, assuming a nonstationary fit
# within each time period and finding RR based on midpoint of each period
## Not run:
out <- calc_riskRatio_gev(returnValue = 3,
    y1 = FortMax$Prec[earlyPeriod], y2 = FortMax$Prec[latePeriod],
    x1 = data.frame(years = earlyYears), x2 = data.frame(years = lateYears),
    locationFun1 = ~years, locationFun2 = ~years,
    xNew1 = data.frame(years = mean(earlyYears)),
    xNew2 = data.frame(years = mean(lateYears)))
## End(Not run)
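# A further variation (an illustrative sketch, not from the package documentation):
# request likelihood-ratio and percentile-bootstrap intervals in addition to the
# delta-method interval, with a bootstrap standard error. The bootControl
# components shown (n, the number of bootstrap resamples, and seed) are
# assumptions; see fit_gev for the supported components.
## Not run:
out2 <- calc_riskRatio_gev(returnValue = 3,
    y1 = FortMax$Prec[earlyPeriod], y2 = FortMax$Prec[latePeriod],
    x1 = data.frame(years = earlyYears), x2 = data.frame(years = lateYears),
    locationFun1 = ~years, locationFun2 = ~years,
    xNew1 = data.frame(years = mean(earlyYears)),
    xNew2 = data.frame(years = mean(lateYears)),
    ciType = c('delta', 'lrt', 'boot_perc'),
    bootSE = TRUE,
    bootControl = list(n = 250, seed = 1))
## End(Not run)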