Cond.KL.Weib.Gamma {RSizeBiased}    R Documentation
Kullback-Leibler divergence between the Weibull or gamma distribution (parametrized with respect to shape and mean or variance) and its (assumed) maximum likelihood estimate.
Description
The function returns the Kullback-Leibler divergence (minus a constant) between the underlying Weibull or gamma distribution (parametrized with respect to shape and mean or variance) and its (assumed) maximum likelihood estimate.
Usage
Cond.KL.Weib.Gamma(par, nullvalue, hata, hatb, type, dist)
Arguments
par
The (actual) shape parameter of the distribution.
nullvalue
The (actual) distribution mean or variance.
hata
Maximum likelihood estimate of the shape parameter of the distribution.
hatb
Maximum likelihood estimate of the scale parameter of the distribution.
type
Numeric switch selecting the constrained moment: 1 for the mean, any other value for the variance.
dist
Character switch selecting the distribution: "weib" for the Weibull or "gamma" for the gamma distribution.
Details
The Kullback-Leibler divergence is computed between the Weibull(\alpha, \beta) or the gamma(\alpha, \beta) distribution and its maximum likelihood estimate, Weibull(\hat \alpha, \hat \beta) or gamma(\hat \alpha, \hat \beta) respectively. For the gamma distribution it is given by
D_{KL} = (\hat \alpha - 1)\Psi(\hat \alpha) - \log \hat \beta - \hat \alpha - \log \Gamma(\hat \alpha) + \log \Gamma(\alpha) + \alpha \log \beta - (\alpha - 1)(\Psi(\hat \alpha) + \log \hat \beta) + \frac{\hat \alpha \hat \beta}{\beta}.
Since D_{KL}
is used to determine the closest distribution, given its mean or variance, to the estimated gamma p.d.f., the first four terms (which depend only on the estimates) are omitted from the function output, i.e. the function returns the following quantity:
\log \Gamma(\alpha) + \alpha \log \beta - (\alpha - 1)(\Psi(\hat \alpha) + \log \hat \beta) + \frac{\hat \alpha \hat \beta}{\beta}.
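This truncated quantity is straightforward to evaluate directly. The following is a minimal sketch (not the package's internal code), assuming the shape-scale parametrization above; the helper name trunc.KL.gamma and the recovery of the scale from the hypothesized shape and mean or variance are illustrative assumptions.

## Minimal sketch of the truncated gamma quantity (illustrative, not package source):
trunc.KL.gamma <- function(par, nullvalue, hata, hatb, type = 2) {
  ## scale implied by the hypothesized shape and mean (type == 1) or variance
  beta <- if (type == 1) nullvalue / par else sqrt(nullvalue / par)
  lgamma(par) + par * log(beta) -
    (par - 1) * (digamma(hata) + log(hatb)) +
    (hata * hatb) / beta
}
trunc.KL.gamma(2, 3, 1, 1, type = 2)  # shape = 2, variance = 3, MLE = (1, 1)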
For the Weibull distribution the corresponding formula is
D_{KL} = \log \frac{\hat \alpha}{{\hat \beta}^{\hat \alpha}} - \log \frac{\alpha}{{\beta}^{\alpha}} + (\hat \alpha - \alpha) \left( \log \hat \beta - \frac{\gamma}{\hat \alpha} \right) + \left( \frac{\hat \beta}{\beta} \right)^\alpha \Gamma\left( \frac{\alpha}{\hat \alpha} + 1 \right) - 1,
where \gamma is the Euler-Mascheroni constant, and since D_{KL}
is used to determine the closest distribution, given its mean or variance, to the estimated Weibull p.d.f., the first term (which depends only on the estimates) is omitted from the function output, i.e. the function returns the following quantity:
- \log \frac{\alpha}{{\beta}^{\alpha}} + (\hat \alpha - \alpha) \left( \log \hat \beta - \frac{\gamma}{\hat \alpha} \right) + \left( \frac{\hat \beta}{\beta} \right)^\alpha \Gamma\left( \frac{\alpha}{\hat \alpha} + 1 \right) - 1
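As with the gamma case, this can be sketched directly in R. The helper name trunc.KL.weib and the moment-based recovery of the scale from the hypothesized shape and mean or variance are illustrative assumptions, not the package source.

## Minimal sketch of the truncated Weibull quantity (illustrative, not package source):
trunc.KL.weib <- function(par, nullvalue, hata, hatb, type = 2) {
  gamma_E <- -digamma(1)  # Euler-Mascheroni constant
  ## scale implied by the hypothesized shape and mean (type == 1) or variance
  beta <- if (type == 1) {
    nullvalue / gamma(1 + 1 / par)
  } else {
    sqrt(nullvalue / (gamma(1 + 2 / par) - gamma(1 + 1 / par)^2))
  }
  -log(par / beta^par) + (hata - par) * (log(hatb) - gamma_E / hata) +
    (hatb / beta)^par * gamma(par / hata + 1) - 1
}
trunc.KL.weib(2, 3, 1, 1, type = 2)  # shape = 2, variance = 3, MLE = (1, 1)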
Value
A scalar, the value of the Kullback-Leibler divergence (minus a constant).
Author(s)
Polychronis Economou
R implementation and documentation: Polychronis Economou <peconom@upatras.gr>
References
Economou et al. (2021). Hypothesis testing for the population mean and variance based on r-size biased samples, under review.
Examples
# K-L divergence (minus a constant) for the gamma distribution with shape = 2
# and variance = 3, against assumed MLEs (hata, hatb) = (1, 1):
Cond.KL.Weib.Gamma(2, 3, 1, 1, 2, "gamma")
# K-L divergence (minus a constant) for the Weibull distribution with shape = 2
# and variance = 3, against assumed MLEs (hata, hatb) = (1, 1):
Cond.KL.Weib.Gamma(2, 3, 1, 1, 2, "weib")