continuous_entropy {ForeCA}
Shannon entropy for a continuous pdf
Description
Computes the Shannon entropy \mathcal{H}(p) for a continuous probability density function (pdf) p(x) using numerical integration.
Usage
continuous_entropy(pdf, lower, upper, base = 2)
Arguments
pdf
    an R function for the pdf, returning p(x) at x.

lower, upper
    lower and upper integration limits.

base
    logarithm base; entropy is measured in “nats” for base = exp(1) and in “bits” for base = 2 (the default).
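Changing base only rescales the result, since log_b(x) = log(x) / log(b). A minimal sketch of this (the uniform_pdf helper is introduced here only for illustration):

library(ForeCA)

uniform_pdf <- function(x) dunif(x, 0, 4)

# Entropy of U(0, 4): log2(4) = 2 bits in base 2; log(4) ~ 1.39 nats in base exp(1).
h_bits <- continuous_entropy(uniform_pdf, 0, 4, base = 2)
h_nats <- continuous_entropy(uniform_pdf, 0, 4, base = exp(1))

# The two differ only by the constant factor log(2): H_bits = H_nats / log(2).
all.equal(h_bits, h_nats / log(2))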
Details
The Shannon entropy of a continuous random variable (RV) X \sim p(x) is defined as

\mathcal{H}(p) = -\int_{-\infty}^{\infty} p(x) \log p(x) \, dx.

Unlike discrete RVs, continuous RVs can have negative entropy (see Examples).
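The definition above can be mirrored directly with integrate(). The following is only a sketch of the formula under the convention 0 log 0 = 0, not necessarily the package's exact implementation:

# Hand-rolled version of the definition above, using integrate().
entropy_by_hand <- function(pdf, lower, upper, base = 2) {
  integrand <- function(x) {
    px <- pdf(x)
    # use the convention 0 * log(0) = 0 where the density vanishes
    ifelse(px > 0, -px * log(px, base = base), 0)
  }
  integrate(integrand, lower, upper)$value
}

entropy_by_hand(function(x) dunif(x, 0, 0.5), 0, 0.5)  # -1 bit, matches log2(0.5)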
Value
scalar; the entropy value (real).

Since continuous_entropy uses numerical integration (integrate()), convergence is not guaranteed (even if the integral in the definition of \mathcal{H}(p) exists). A warning is issued if integrate() does not converge.
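Because non-convergence only produces a warning, callers who prefer a hard signal can wrap the call themselves. safe_entropy below is a hypothetical helper, not part of ForeCA:

# Hypothetical wrapper that converts the non-convergence
# warning from integrate() into an NA result.
safe_entropy <- function(pdf, lower, upper, base = 2) {
  tryCatch(
    continuous_entropy(pdf, lower, upper, base = base),
    warning = function(w) {
      message("integrate() did not converge: ", conditionMessage(w))
      NA_real_
    }
  )
}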
See Also

discrete_entropy
Examples
# entropy of U(a, b) is log(b - a), so it is not necessarily positive, e.g.
continuous_entropy(function(x) dunif(x, 0, 0.5), 0, 0.5)  # log2(0.5) = -1
# Same, but for U(-1, 1)
my_density <- function(x) {
  dunif(x, -1, 1)
}
continuous_entropy(my_density, -1, 1)  # = log2(upper - lower) = log2(2) = 1
# a 'triangle' distribution: p(x) = x on [0, sqrt(2)]
continuous_entropy(function(x) x, 0, sqrt(2))
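
# An additional check against a closed form (not in the original examples):
# for N(0, 1) the entropy is 0.5 * log2(2 * pi * exp(1)) ~ 2.05 bits.
# Limits of +/- 10 standard deviations approximate the infinite range,
# since the tail contribution to the integral is negligible.
continuous_entropy(function(x) dnorm(x), -10, 10)
0.5 * log2(2 * pi * exp(1))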