post_correct {bssm}    R Documentation

Run Post-correction for Approximate MCMC using ψ-APF

Description

Function post_correct updates previously obtained approximate MCMC output with post-correction weights, leading to an asymptotically exact weighted posterior. It returns updated MCMC output where the components weights, posterior, alpha, alphahat, and Vt are updated (depending on the original output type).

Usage

post_correct(
  model,
  mcmc_output,
  particles,
  threads = 1L,
  is_type = "is2",
  seed = sample(.Machine$integer.max, size = 1)
)

Arguments

model

Model of class nongaussian or ssm_nlg.

mcmc_output

An output from run_mcmc, used to compute the MAP estimate of theta. While the intended use assumes this comes from approximate MCMC, this is not checked, i.e., it is also possible to input previous (asymptotically) exact output.

particles

Number of particles for ψ-APF (positive integer).

threads

Number of parallel threads (positive integer, default is 1).

is_type

Type of IS-correction. Possible choices are "is3" for simple importance sampling (a weight is computed for each MCMC iteration independently), "is2" for jump chain importance sampling type weighting (default), or "is1" for importance sampling type weighting where the number of particles used for weight computations is proportional to the length of the jump chain block.

seed

Seed for the random number generator (positive integer).
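To build intuition for the is_type choices above, the jump chain representation used by "is1" and "is2" can be illustrated in base R: consecutive repeated values of a Metropolis-type chain are collapsed into unique accepted values and block lengths. This is a conceptual sketch only, not bssm's internal implementation:

```r
# Conceptual sketch (base R only): the jump chain of an MCMC sample.
# In "is2" weighting, one importance weight is computed per unique
# (accepted) value and then scaled by the block length.
chain <- c(1.2, 1.2, 1.2, 0.7, 0.7, 1.5, 1.5, 1.5, 1.5)
blocks <- rle(chain)
jump_chain <- blocks$values      # unique accepted values
block_lengths <- blocks$lengths  # how long the chain stayed at each

# An estimate based on the jump chain weights each unique value by
# its block length, recovering the plain chain average:
mean_full <- mean(chain)
mean_jump <- sum(jump_chain * block_lengths) / sum(block_lengths)
stopifnot(all.equal(mean_full, mean_jump))
```

With "is3", a weight is instead computed at every iteration (including repeats), while "is1" spends particles on each block in proportion to block_lengths.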

Value

The original mcmc_output object with updated components weights, posterior, alpha, alphahat, and Vt (depending on the original output type).
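Because the corrected output is weighted, posterior summaries should account for the weights component. A minimal base-R sketch of such a self-normalized importance sampling estimate, using made-up draws and weights rather than actual bssm output:

```r
# Hypothetical posterior draws and post-correction weights; with real
# output these roles are played by the theta and weights components
# of the corrected mcmc_output.
theta <- c(0.4, 0.6, 0.5, 0.7)
w <- c(1.0, 1.2, 0.8, 1.0)

# Self-normalized importance sampling estimate of the posterior mean:
post_mean <- sum(w * theta) / sum(w)

# Effective sample size indicates how much the weighting costs;
# it equals the number of draws only when all weights are equal:
ess <- sum(w)^2 / sum(w^2)
```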

References

A. Doucet, M. K. Pitt, G. Deligiannidis, R. Kohn (2015). Efficient implementation of Markov chain Monte Carlo when using an unbiased likelihood estimator. Biometrika, 102(2), 295-313. https://doi.org/10.1093/biomet/asu075

M. Vihola, J. Helske, J. Franks (2020). Importance sampling type estimators based on approximate marginal Markov chain Monte Carlo. Scandinavian Journal of Statistics, 1-38. https://doi.org/10.1111/sjos.12492

Examples


set.seed(1)
n <- 300
x1 <- sin((2 * pi / 12) * 1:n)
x2 <- cos((2 * pi / 12) * 1:n)
alpha <- numeric(n)
alpha[1] <- 0
rho <- 0.7
sigma <- 2
mu <- 1
for(i in 2:n) {
  alpha[i] <- rnorm(1, mu * (1 - rho) + rho * alpha[i-1], sigma)
}
u <- rpois(n, 50)
y <- rbinom(n, size = u, plogis(0.5 * x1 + x2 + alpha))

ts.plot(y / u)

model <- ar1_ng(y, distribution = "binomial", 
  rho = uniform(0.5, -1, 1), sigma = gamma_prior(1, 2, 0.001),
  mu = normal(0, 0, 10),
  xreg = cbind(x1, x2), beta = normal(c(0, 0), 0, 5),
  u = u)

out_approx <- run_mcmc(model, mcmc_type = "approx", 
  local_approx = FALSE, iter = 50000)

out_is2 <- post_correct(model, out_approx, particles = 30,
  threads = 2)
out_is2$time

summary(out_approx, return_se = TRUE)
summary(out_is2, return_se = TRUE)

# latent state
library("dplyr")
library("ggplot2")
state_approx <- as.data.frame(out_approx, variable = "states") %>% 
  group_by(time) %>%
  summarise(mean = mean(value))
  
state_exact <- as.data.frame(out_is2, variable = "states") %>% 
  group_by(time) %>%
  summarise(mean = weighted.mean(value, weight))

dplyr::bind_rows(approx = state_approx, 
  exact = state_exact, .id = "method") %>%
  filter(time > 200) %>%
ggplot(aes(time, mean, colour = method)) + 
  geom_line() + 
  theme_bw()

# posterior means
p_approx <- predict(out_approx, model, type = "mean", 
  nsim = 1000, future = FALSE) %>% 
  group_by(time) %>%
  summarise(mean = mean(value))
p_exact <- predict(out_is2, model, type = "mean", 
  nsim = 1000, future = FALSE) %>% 
  group_by(time) %>%
  summarise(mean = mean(value))

dplyr::bind_rows(approx = p_approx, 
  exact = p_exact, .id = "method") %>%
  filter(time > 200) %>%
ggplot(aes(time, mean, colour = method)) + 
  geom_line() + 
  theme_bw() 


[Package bssm version 1.1.7-1 Index]