ELHMC {elhmc}  R Documentation 
Description

This function draws samples from an Empirical Likelihood Bayesian posterior distribution of parameters using Hamiltonian Monte Carlo.
Usage

ELHMC(initial, data, fun, dfun, prior, dprior, n.samples = 100,
      lf.steps = 10, epsilon = 0.05, p.variance = 1, tol = 10^-5,
      detailed = FALSE, FUN, DFUN)
Arguments

initial
a vector containing the initial values of the parameters
data 
a matrix containing the data 
fun 
the estimating function 
dfun 
a function that calculates the gradient of the estimating function

prior
the prior density of the parameters: a function with one argument that returns the density value at that point
dprior
the gradient of the logarithm of the prior density: a function with one argument
n.samples 
number of samples to draw 
lf.steps 
number of leap frog steps in each Hamiltonian Monte Carlo update 
epsilon
the leap frog step size(s). This has to be a single numeric value or a vector of the same length as initial
p.variance 
the covariance matrix of a multivariate normal distribution
used to generate the initial values of momentum 
tol 
EL tolerance 
detailed
If this is set to TRUE, the returned list contains extra information about the sampling (see below)
FUN
the same as fun but takes the entire data matrix as its second argument and returns the matrix G(X) = (g(x_1), ..., g(x_n))^T described below
DFUN
the same as dfun but takes the entire data matrix and returns the gradient of G(X)
Details

Suppose there are data x = (x_1, x_2, ..., x_n) where x_i takes values in R^p and follows a probability distribution F. Also, F comes from a family of distributions that depends on a parameter \theta = (\theta_1, ..., \theta_d), and there is a smooth function g(x_i, \theta) = (g_1(x_i, \theta), ..., g_q(x_i, \theta))^T that satisfies E_F[g(x_i, \theta)] = 0 for i = 1, ..., n.

ELHMC draws samples from an Empirical Likelihood Bayesian posterior distribution of the parameter \theta, given the data x as data, the smoothing function g as fun and the gradient of g as dfun, or G(X) = (g(x_1), g(x_2), ..., g(x_n))^T as FUN and the gradient of G as DFUN.
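The relationship between fun and FUN can be sketched in plain R for the mean of 2-dimensional data; the names g_point and G_all below are illustrative, not part of the package:

```r
## Pointwise estimating function for a 2-dimensional mean:
## g(x_i, theta) = x_i - theta, so E_F[g] = 0 when theta is the mean
g_point <- function(params, x) {
  x - params
}

## Whole-data version G(X) = (g(x_1), ..., g(x_n))^T, the form expected
## for FUN: row i of the result is g evaluated at data point x_i
G_all <- function(params, x) {
  sweep(x, 2, params)  # subtract params from each row of x
}

x <- rbind(c(1, 1), c(1, -1), c(-1, 1), c(-1, -1))
theta <- c(0.5, -0.5)
G <- G_all(theta, x)
```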
Value

The function returns a list with the following elements:
samples 
A matrix containing the parameter samples 
acceptance.rate 
The acceptance rate 
call 
The matched call 
If detailed = TRUE, the list contains these extra elements:

proposed
A matrix containing the values proposed at each Hamiltonian Monte Carlo update
acceptance
A vector of TRUE/FALSE values indicating whether each proposed value was accepted
trajectory
A list with 2 elements giving the position and momentum trajectories over the leap frog steps
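A hedged sketch of working with the returned fields; the result list res is mocked here using only the element names documented above, since running ELHMC itself requires the package:

```r
## Mock of an ELHMC-style result list with the documented fields;
## 'samples' would normally come from an actual ELHMC call
set.seed(1)
res <- list(samples = matrix(rnorm(100), ncol = 2),
            acceptance.rate = 0.8)

## Posterior point estimate and spread from the sampled parameter values
posterior.mean <- colMeans(res$samples)
posterior.sd <- apply(res$samples, 2, sd)
```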
References

Chaudhuri, S., Mondal, D. and Yin, T. (2015) Hamiltonian Monte Carlo sampling in Bayesian empirical likelihood computation. Journal of the Royal Statistical Society: Series B.

Neal, R. (2011) MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo (eds S. Brooks, A. Gelman, G. L. Jones and X.-L. Meng), pp. 113-162. New York: Taylor and Francis.
Examples

## Not run:
## Suppose there are four data points (1, 1), (1, -1), (-1, 1), (-1, -1)
x = rbind(c(1, 1), c(1, -1), c(-1, 1), c(-1, -1))
## If the parameter of interest is the mean, the smoothing function and
## its gradient would be
f <- function(params, x) {
  x - params
}
df <- function(params, x) {
  rbind(c(-1, 0), c(0, -1))
}
## Draw 50 samples from the Empirical Likelihood Bayesian posterior distribution
## of the mean, using initial values (0.96, 0.97) and standard normal distributions
## as priors:
normal_prior <- function(x) {
  exp(-0.5 * x[1] ^ 2) / sqrt(2 * pi) * exp(-0.5 * x[2] ^ 2) / sqrt(2 * pi)
}
normal_prior_log_gradient <- function(x) {
  -x
}
set.seed(1234)
mean.samples <- ELHMC(initial = c(0.96, 0.97), data = x, fun = f, dfun = df,
                      n.samples = 50, prior = normal_prior,
                      dprior = normal_prior_log_gradient)
plot(mean.samples$samples, type = "l", xlab = "", ylab = "")
## End(Not run)
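As a sanity check on the priors used in the example, one can verify numerically that normal_prior_log_gradient is the gradient of log(normal_prior); this check is illustrative and not part of the package:

```r
## Standard bivariate normal density and the gradient of its logarithm,
## as in the example above
normal_prior <- function(x) {
  exp(-0.5 * x[1] ^ 2) / sqrt(2 * pi) * exp(-0.5 * x[2] ^ 2) / sqrt(2 * pi)
}
normal_prior_log_gradient <- function(x) {
  -x
}

## Central-difference approximation of the gradient of log(prior)
x0 <- c(0.3, -0.2)
h <- 1e-6
num.grad <- sapply(1:2, function(i) {
  e <- replace(numeric(2), i, h)
  (log(normal_prior(x0 + e)) - log(normal_prior(x0 - e))) / (2 * h)
})
```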