new_state {matrixdist}                R Documentation

New state in a Markov jump process

Description

Given the cumulative transition matrix of the embedded Markov chain, a uniform value u in (0,1), and the previous state, this function returns the next state of the Markov jump process.

Usage

new_state(prev_state, cum_embedded_mc, u)

Arguments

prev_state

Previous state of the Markov jump process.

cum_embedded_mc

Cumulative transition matrix of the embedded Markov chain, with each row containing cumulative transition probabilities.

u

Uniform random value in (0,1).

Value

Next state of the Markov jump process.
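
Examples

The following is a minimal sketch in plain R of how such a cumulative-probability state update can work; the helper new_state_sketch and the cumulative matrix are illustrative assumptions, not the package's internal implementation.

# Illustrative sketch only: pick the first state whose cumulative
# transition probability (in the row of the previous state) reaches u.
new_state_sketch <- function(prev_state, cum_embedded_mc, u) {
  which(u <= cum_embedded_mc[prev_state, ])[1]
}

# Assumed 3-state embedded chain; rows hold cumulative transition
# probabilities, with zero probability of remaining in the same state.
cum_embedded_mc <- matrix(c(0.0, 0.5, 1.0,
                            0.3, 0.3, 1.0,
                            0.6, 1.0, 1.0),
                          nrow = 3, byrow = TRUE)
set.seed(1)
u <- runif(1)
new_state_sketch(prev_state = 1, cum_embedded_mc = cum_embedded_mc, u = u)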

