kldcauchy {mcauchyd}    R Documentation
Kullback-Leibler Divergence between Centered Multivariate Cauchy Distributions
Description
Computes the Kullback-Leibler divergence between two random vectors distributed according to multivariate Cauchy distributions (MCD) with zero location vector.
Usage
kldcauchy(Sigma1, Sigma2, eps = 1e-06)
Arguments
Sigma1: symmetric, positive-definite matrix. The scatter matrix of the first distribution.
Sigma2: symmetric, positive-definite matrix. The scatter matrix of the second distribution.
eps: numeric. Precision for the computation of the partial derivative of the Lauricella D-hypergeometric function (see Details). Default: 1e-06.
Details
Let X_1 be a random vector of \mathbb{R}^p distributed according to the MCD with parameters (0, \Sigma_1), and X_2 a random vector of \mathbb{R}^p distributed according to the MCD with parameters (0, \Sigma_2).
Let \lambda_1, \dots, \lambda_p be the eigenvalues of the square matrix \Sigma_1 \Sigma_2^{-1}, sorted in increasing order:
\lambda_1 < \dots < \lambda_{p-1} < \lambda_p
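Which case applies can be checked numerically. A minimal sketch in base R, using the diagonal scatter matrices from the Examples below:

Sigma1 <- matrix(c(0.5, 0, 0, 0, 0.4, 0, 0, 0, 0.3), nrow = 3)
Sigma2 <- diag(1, 3)
# Eigenvalues of Sigma1 %*% solve(Sigma2), sorted in increasing order;
# they are real and positive since both matrices are positive-definite
lambda <- sort(Re(eigen(Sigma1 %*% solve(Sigma2), only.values = TRUE)$values))
lambda  # 0.3 0.4 0.5: all eigenvalues are < 1, so the lambda_p < 1 case applies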
Depending on the values of these eigenvalues, the Kullback-Leibler divergence of X_1 from X_2 is computed as follows:
If \lambda_1 < 1 and \lambda_p > 1:
\displaystyle{ KL(X_1||X_2) = -\frac{1}{2} \ln{ \prod_{i=1}^p{\lambda_i}} + \frac{1+p}{2} \bigg( \ln{\lambda_p} - \frac{\partial}{\partial a} \bigg\{ F_D^{(p)} \bigg( a, \underbrace{\frac{1}{2}, \dots, \frac{1}{2}, a + \frac{1}{2}}_p ; a + \frac{1+p}{2} ; 1 - \frac{\lambda_1}{\lambda_p}, \dots, 1 - \frac{\lambda_{p-1}}{\lambda_p}, 1 - \frac{1}{\lambda_p} \bigg) \bigg\}\bigg|_{a=0} \bigg) }
If \lambda_p < 1:
\displaystyle{ KL(X_1||X_2) = -\frac{1}{2} \ln{ \prod_{i=1}^p{\lambda_i}} - \frac{1+p}{2} \frac{\partial}{\partial a} \bigg\{ F_D^{(p)} \bigg( a, \underbrace{\frac{1}{2}, \dots, \frac{1}{2}}_p ; a + \frac{1+p}{2} ; 1 - \lambda_1, \dots, 1 - \lambda_p \bigg) \bigg\}\bigg|_{a=0} }
If \lambda_1 > 1:
\displaystyle{ KL(X_1||X_2) = -\frac{1}{2} \ln{ \prod_{i=1}^p{\lambda_i}} - \frac{1+p}{2} \prod_{i=1}^p{\frac{1}{\sqrt{\lambda_i}}} \times \frac{\partial}{\partial a} \bigg\{ F_D^{(p)} \bigg( \frac{1+p}{2}, \underbrace{\frac{1}{2}, \dots, \frac{1}{2}}_p ; a + \frac{1+p}{2} ; 1 - \frac{1}{\lambda_1}, \dots, 1 - \frac{1}{\lambda_p} \bigg) \bigg\}\bigg|_{a=0} }
where F_D^{(p)} is the Lauricella D-hypergeometric function defined for p variables:
\displaystyle{ F_D^{(p)}\left(a; b_1, ..., b_p; g; x_1, ..., x_p\right) = \sum\limits_{m_1 \geq 0} ... \sum\limits_{m_p \geq 0}{ \frac{ (a)_{m_1+...+m_p}(b_1)_{m_1} ... (b_p)_{m_p} }{ (g)_{m_1+...+m_p} } \frac{x_1^{m_1}}{m_1!} ... \frac{x_p^{m_p}}{m_p!} } }
and (a)_k = a (a+1) \cdots (a+k-1) denotes the Pochhammer symbol, with (a)_0 = 1.
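As an illustration only, this series can be evaluated by direct truncation when each |x_i| < 1. Below is a minimal sketch in R for p = 2; poch and lauricella_p2 are hypothetical helper names, not the package's internal routine, which additionally controls the truncation error via eps and computes the partial derivative used above:

# Pochhammer symbol (a)_k = a (a+1) ... (a+k-1), with (a)_0 = 1
poch <- function(a, k) if (k == 0) 1 else prod(a + 0:(k - 1))

# Truncated Lauricella F_D series for p = 2 variables (valid for |x1|, |x2| < 1)
lauricella_p2 <- function(a, b1, b2, g, x1, x2, kmax = 60) {
  s <- 0
  for (m1 in 0:kmax) {
    for (m2 in 0:kmax) {
      s <- s + poch(a, m1 + m2) * poch(b1, m1) * poch(b2, m2) /
        poch(g, m1 + m2) * x1^m1 / factorial(m1) * x2^m2 / factorial(m2)
    }
  }
  s
}

# Sanity check: with x2 = 0 the series reduces to the Gauss 2F1 series,
# and 2F1(1/2, 1/2; 3/2; z) = asin(sqrt(z)) / sqrt(z)
lauricella_p2(0.5, 0.5, 0.5, 1.5, 0.3, 0)  # ~ asin(sqrt(0.3)) / sqrt(0.3)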
Value
A numeric value: the Kullback-Leibler divergence between the two distributions,
with two attributes attr(, "epsilon")
(precision of the partial derivative of the Lauricella D
-hypergeometric function,see Details)
and attr(, "k")
(number of iterations).
Author(s)
Pierre Santagostini, Nizar Bouhlel
References
N. Bouhlel, D. Rousseau, A Generic Formula and Some Special Cases for the Kullback–Leibler Divergence between Central Multivariate Cauchy Distributions. Entropy, 24, 838, July 2022. doi:10.3390/e24060838
Examples
Sigma1 <- matrix(c(1, 0.6, 0.2, 0.6, 1, 0.3, 0.2, 0.3, 1), nrow = 3)
Sigma2 <- matrix(c(1, 0.3, 0.1, 0.3, 1, 0.4, 0.1, 0.4, 1), nrow = 3)
# The Kullback-Leibler divergence is not symmetric:
kldcauchy(Sigma1, Sigma2)
kldcauchy(Sigma2, Sigma1)
Sigma1 <- matrix(c(0.5, 0, 0, 0, 0.4, 0, 0, 0, 0.3), nrow = 3)
Sigma2 <- diag(1, 3)
# Case when all eigenvalues of Sigma1 %*% solve(Sigma2) are < 1
kldcauchy(Sigma1, Sigma2)
# Case when all eigenvalues of Sigma1 %*% solve(Sigma2) are > 1
kldcauchy(Sigma2, Sigma1)
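# Continuing the example above: the returned value carries the two
# attributes documented in the Value section
res <- kldcauchy(Sigma1, Sigma2)
attr(res, "epsilon")  # precision of the partial derivative of the Lauricella function
attr(res, "k")        # number of iterations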