EstimatorScore-class {adestr}    R Documentation
Performance scores for point and interval estimators
Description
These classes encode various metrics which can be used to evaluate the performance characteristics of point and interval estimators.
Usage
Expectation()
Bias()
Variance()
MSE()
OverestimationProbability()
Coverage()
SoftCoverage(shrinkage = 1)
Width()
TestAgreement()
Centrality(interval = NULL)
Arguments
shrinkage: shrinkage factor for the bump function.
interval: confidence interval estimator with respect to which the centrality of a point estimator should be evaluated.
Value
An object of class EstimatorScore. This class signals that an object can be used with the evaluate_estimator function.
Slots
label
name of the performance score. Used in printing methods.
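The following minimal sketch shows how such a score object could be inspected interactively. It assumes the S4 layout documented above (in particular the label slot); the exact label text is not specified here.

library(adestr)
score <- MSE()
# EstimatorScore objects are S4 objects; is() should report the parent class,
# which is what signals compatibility with evaluate_estimator().
is(score, "EstimatorScore")
# The label slot holds the name used by the printing methods.
score@label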
Details on the implemented estimators
In the following, precise definitions of the performance scores implemented in adestr are given. To this end, let \hat{\mu} denote a point estimator and (\hat{l}, \hat{u}) an interval estimator, let \mathbb{E} denote the expected value of a random variable and P the probability of an event, and let \mu be the true value of the underlying parameter to be estimated.
Scores for point estimators (PointEstimatorScore; illustrated by the simulation sketch after this list):

- Expectation(): \mathbb{E}[\hat{\mu}]
- Bias(): \mathbb{E}[\hat{\mu} - \mu]
- Variance(): \mathbb{E}[(\hat{\mu} - \mathbb{E}[\hat{\mu}])^2]
- MSE(): \mathbb{E}[(\hat{\mu} - \mu)^2]
- OverestimationProbability(): P(\hat{\mu} > \mu)
- Centrality(interval): \mathbb{E}[(\hat{\mu} - \hat{l}) + (\hat{\mu} - \hat{u})]
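To make these definitions concrete, the following standalone simulation sketch approximates each point-estimator score for the plain sample mean of normally distributed data. It does not use adestr's numerical integration routines, and the values of mu, sigma and n are arbitrary illustrative choices.

set.seed(1)
mu <- 0.3; sigma <- 1; n <- 30
# Simulate many replications of the sample mean (the point estimator).
est <- replicate(1e5, mean(rnorm(n, mean = mu, sd = sigma)))
c(
  Expectation = mean(est),                 # E[mu_hat]
  Bias        = mean(est - mu),            # E[mu_hat - mu]
  Variance    = mean((est - mean(est))^2), # E[(mu_hat - E[mu_hat])^2]
  MSE         = mean((est - mu)^2),        # E[(mu_hat - mu)^2]
  OverestProb = mean(est > mu)             # P(mu_hat > mu)
)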
Scores for confidence intervals (IntervalEstimatorScore; see the sketch after this list):

- Coverage(): P(\hat{l} \leq \mu \leq \hat{u})
- Width(): \mathbb{E}[\hat{u} - \hat{l}]
- TestAgreement(): P\left( \left\{ 0 < \hat{l} \text{ and } (c_{1, e} < Z_1 \text{ or } c_{2}(Z_1) < Z_2) \right\} \text{ or } \left\{ \hat{l} \leq 0 \text{ and } (Z_1 < c_{1, f} \text{ or } Z_2 \leq c_{2}(Z_1)) \right\} \right)

Here, Z_1 and Z_2 denote the first- and second-stage test statistics, c_{1, e} and c_{1, f} the first-stage efficacy and futility boundaries, and c_{2}(Z_1) the second-stage critical value of the underlying design.
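Analogously, Coverage() and Width() can be approximated with the same standalone simulation approach. The sketch below uses a fixed-sample 95% z-interval purely for illustration; it is not adestr's exact computation for adaptive designs.

set.seed(1)
mu <- 0.3; sigma <- 1; n <- 30
z <- qnorm(0.975)
xbar <- replicate(1e5, mean(rnorm(n, mean = mu, sd = sigma)))
l <- xbar - z * sigma / sqrt(n)  # lower confidence limit
u <- xbar + z * sigma / sqrt(n)  # upper confidence limit
c(
  Coverage = mean(l <= mu & mu <= u), # P(l <= mu <= u)
  Width    = mean(u - l)              # E[u - l]
)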
Examples
evaluate_estimator(
  score = MSE(),
  estimator = SampleMean(),
  data_distribution = Normal(FALSE),
  design = get_example_design(),
  mu = c(0, 0.3, 0.6),
  sigma = 1,
  exact = FALSE
)

evaluate_estimator(
  score = Coverage(),
  estimator = StagewiseCombinationFunctionOrderingCI(),
  data_distribution = Normal(FALSE),
  design = get_example_design(),
  mu = c(0, 0.3),
  sigma = 1,
  exact = FALSE
)
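
# A hedged sketch (not taken from the package examples): Centrality() is
# assumed to accept an interval estimator, such as the CI used above, as its
# reference interval; argument values are illustrative only.
evaluate_estimator(
  score = Centrality(interval = StagewiseCombinationFunctionOrderingCI()),
  estimator = SampleMean(),
  data_distribution = Normal(FALSE),
  design = get_example_design(),
  mu = 0.3,
  sigma = 1,
  exact = FALSE
)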