mcmc_replica_exchange_mc {tfprobability}    R Documentation

Runs one step of the Replica Exchange Monte Carlo algorithm

Description

Replica Exchange Monte Carlo is a Markov chain Monte Carlo (MCMC) algorithm that is also known as Parallel Tempering. The algorithm runs multiple chains at different temperatures in parallel and exchanges states between them according to the Metropolis-Hastings criterion. The K replicas are parameterized in terms of inverse_temperatures, (beta[0], beta[1], ..., beta[K-1]). If the target distribution has probability density p(x), the kth replica has density p(x)**beta_k.
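For intuition, a minimal base-R sketch of the tempered log-densities the replicas sample from (the standard normal target and the beta values here are illustrative assumptions, not anything prescribed by the kernel):

# Each replica k targets a density proportional to p(x)^beta_k,
# i.e. its unnormalized log-density is beta_k * log p(x).
p <- function(x) dnorm(x)                         # toy target: standard normal
betas <- c(1, 0.5, 0.25)                          # beta[0] = 1 recovers the original target
tempered_log_p <- function(x, beta) beta * log(p(x))
sapply(betas, function(b) tempered_log_p(1.5, b)) # flatter log-density as beta decreases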

Usage

mcmc_replica_exchange_mc(
  target_log_prob_fn,
  inverse_temperatures,
  make_kernel_fn,
  swap_proposal_fn = tfp$mcmc$replica_exchange_mc$default_swap_proposal_fn(1),
  state_includes_replicas = FALSE,
  seed = NULL,
  name = NULL
)

Arguments

target_log_prob_fn

Function which takes an argument like current_state (if it's a list, current_state will be unpacked) and returns its (possibly unnormalized) log-density under the target distribution.

inverse_temperatures

1-D Tensor of inverse temperatures at which the replicas sample. Must have statically known shape. inverse_temperatures[0] produces the states returned by samplers and is typically == 1.

make_kernel_fn

Function which takes target_log_prob_fn and seed arguments and returns a TransitionKernel instance; see the sketch after this argument list.

swap_proposal_fn

Function which takes the number of replicas and returns combinations of replicas to propose for exchange.

state_includes_replicas

Boolean indicating whether the leftmost dimension of each state sample should index replicas. If TRUE, the leftmost dimension of the current_state kwarg to tfp.mcmc.sample_chain will be interpreted as indexing replicas.

seed

Integer to seed the random number generator.

name

String prefixed to Ops created by this function. Default value: NULL (i.e., "remc_kernel").
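As a sketch of the make_kernel_fn argument referenced above, the following builds a Hamiltonian Monte Carlo base kernel per replica (the step_size and num_leapfrog_steps values are illustrative assumptions, not defaults of this kernel):

library(tfprobability)

# The seed argument defaults to NULL so the function also works with
# TensorFlow Probability versions that call make_kernel_fn without a seed.
make_kernel_fn <- function(target_log_prob_fn, seed = NULL) {
  mcmc_hamiltonian_monte_carlo(
    target_log_prob_fn = target_log_prob_fn,
    step_size = 0.3,
    num_leapfrog_steps = 3
  )
}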

Details

Typically beta[0] = 1.0, and 1.0 > beta[1] > beta[2] > ... > 0.0.
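For instance, a geometrically decaying ladder satisfying this ordering could be built as follows (the decay factor 0.5 and the number of replicas are arbitrary choices for illustration):

inverse_temperatures <- 0.5 ^ (0:3)   # 1.0, 0.5, 0.25, 0.125: beta[0] = 1, strictly decreasing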

Value

List of next_state (Tensor or list of Tensors representing the state(s) of the Markov chain(s) at each result step; has the same shape as current_state) and kernel_results (collections$namedtuple of internal calculations used to advance the chain).
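A hedged end-to-end sketch of how the kernel is typically driven via mcmc_sample_chain (the standard normal target, temperature ladder, HMC settings, and chain lengths are illustrative assumptions only):

library(tensorflow)
library(tfprobability)

target <- tfd_normal(loc = 0, scale = 1)   # toy target distribution (float32)

remc_kernel <- mcmc_replica_exchange_mc(
  target_log_prob_fn = function(x) tfd_log_prob(target, x),
  # Cast explicitly so the temperatures match the float32 target log-density.
  inverse_temperatures = tf$constant(c(1, 0.5, 0.25, 0.125), dtype = tf$float32),
  make_kernel_fn = function(target_log_prob_fn, seed = NULL) {
    mcmc_hamiltonian_monte_carlo(
      target_log_prob_fn = target_log_prob_fn,
      step_size = 0.3,
      num_leapfrog_steps = 3
    )
  }
)

# Draw samples; by default only the beta = 1 replica's states are returned.
samples <- mcmc_sample_chain(
  kernel = remc_kernel,
  num_results = 1000,
  num_burnin_steps = 500,
  current_state = tf$constant(0)
)

If state_includes_replicas = TRUE were set instead, the leftmost dimension of current_state would be interpreted as indexing the four replicas, as described under Arguments.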

See Also

Other mcmc_kernels: mcmc_dual_averaging_step_size_adaptation(), mcmc_hamiltonian_monte_carlo(), mcmc_metropolis_adjusted_langevin_algorithm(), mcmc_metropolis_hastings(), mcmc_no_u_turn_sampler(), mcmc_random_walk_metropolis(), mcmc_simple_step_size_adaptation(), mcmc_slice_sampler(), mcmc_transformed_transition_kernel(), mcmc_uncalibrated_hamiltonian_monte_carlo(), mcmc_uncalibrated_langevin(), mcmc_uncalibrated_random_walk()


[Package tfprobability version 0.15.1]