forward_g {LaMa} | R Documentation
General forward algorithm with time-varying transition probability matrix
Description
General forward algorithm with time-varying transition probability matrix
Usage
forward_g(delta, Gamma, allprobs, trackInd = NULL)
Arguments
delta
Initial distribution of length N, or matrix of dimension c(k,N) for k independent tracks, if trackInd is provided.
Gamma
Array of transition probability matrices of dimension c(N,N,n-1), as in a time series of length n there are only n-1 transitions.
If you provide an array of dimension c(N,N,n), the first slice will be ignored. If trackInd is provided, Gamma needs to be an array of dimension c(N,N,n), matching the number of rows of allprobs; for each track, the transition matrix at its beginning will be ignored.
Whether the parameters for Gamma are pooled across tracks or not depends on your calculation of Gamma. If pooled, you can use tpm_g(Z, beta) to calculate the entire array of transition matrices when Z is of dimension c(n,p). This function can also be used to fit continuous-time HMMs, where each array entry is the Markov semigroup.
allprobs
Matrix of state-dependent probabilities / density values of dimension c(n, N).
trackInd
Optional vector of length k containing the indices that correspond to the beginning of separate tracks. If provided, the total log-likelihood will be the sum of each track's likelihood contribution. In this case, Gamma needs to be an array of dimension c(N,N,n), matching the number of rows of allprobs. For each track, the transition matrix at the beginning of the track will be ignored (as there is no transition between tracks). Furthermore, instead of a single vector delta corresponding to the initial distribution, a matrix of initial distributions of dimension c(k,N) can be provided, such that each track starts with its own initial distribution (see the sketch below the argument descriptions).
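As a rough illustration of how the arguments fit together for several tracks, the sketch below builds the Gamma array with tpm_g() from pooled parameters and passes a matrix of initial distributions together with trackInd. All data objects (the covariate, the observations, the track start indices) are hypothetical placeholders and not part of this documentation.

## hypothetical multi-track setup: 2 states, 3 tracks, n = 300 observations in total
N = 2; k = 3; n = 300
trackInd = c(1, 101, 201)                         # assumed start indices of the three tracks
z = rnorm(n)                                      # one hypothetical covariate
beta = matrix(c(-2, -2, 0.5, -0.5), nrow = 2)     # pooled regression parameters
Gamma = tpm_g(Z = z, beta = beta)                 # array of dimension c(N, N, n)
x = rnorm(n, mean = 2)                            # hypothetical observations
allprobs = cbind(dnorm(x, 0, 1), dnorm(x, 5, 2))  # state-dependent densities, c(n, N)
Delta = matrix(0.5, k, N)                         # one initial distribution per track, c(k, N)
forward_g(Delta, Gamma, allprobs, trackInd = trackInd)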
Value
Log-likelihood for given data and parameters
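The returned value is the log-likelihood computed via the (scaled) forward recursion. As a point of reference, a minimal plain-R sketch of that computation for a single track, assuming Gamma has dimension c(N,N,n-1), could look like the function below; forward_g() performs the same computation much more efficiently.

## naive reference implementation of the scaled forward algorithm (single track)
forward_naive = function(delta, Gamma, allprobs){
  n = nrow(allprobs)
  foo = delta * allprobs[1,]                       # forward variables at t = 1
  llk = log(sum(foo)); foo = foo / sum(foo)        # scale to avoid numerical underflow
  for(t in 2:n){
    foo = (foo %*% Gamma[,,t-1]) * allprobs[t,]    # forward variables at t
    llk = llk + log(sum(foo)); foo = foo / sum(foo)
  }
  llk
}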
Examples
## generating data from inhomogeneous 2-state HMM
mu = c(0, 6)
sigma = c(2, 4)
beta = matrix(c(-2,-2,0.5,-0.5),nrow=2)
delta = c(0.5, 0.5)
# simulation
n = 2000
s = x = rep(NA, n)
z = rnorm(n, 0, 2)
s[1] = sample(1:2, 1, prob = delta)
x[1] = rnorm(1, mu[s[1]], sigma[s[1]])
for(t in 2:n){
  # time-varying transition probability matrix depending on covariate z[t]
  Gamma = diag(2)
  Gamma[!Gamma] = exp(beta[,1] + beta[,2]*z[t])
  Gamma = Gamma / rowSums(Gamma)
  s[t] = sample(1:2, 1, prob = Gamma[s[t-1],])
  x[t] = rnorm(1, mu[s[t]], sigma[s[t]])
}
## negative log likelihood function
mllk = function(theta.star, x, z){
  # parameter transformations for unconstrained optimization
  beta = matrix(theta.star[1:4], 2, 2)
  Gamma = tpm_g(Z = z, beta = beta)
  delta = c(plogis(theta.star[5]), 1 - plogis(theta.star[5]))
  mu = theta.star[6:7]
  sigma = exp(theta.star[8:9])
  # calculate all state-dependent probabilities
  allprobs = matrix(1, length(x), 2)
  for(j in 1:2){ allprobs[,j] = dnorm(x, mu[j], sigma[j]) }
  # return negative log-likelihood for minimization
  -forward_g(delta, Gamma, allprobs)
}
## fitting an HMM to the data
theta.star = c(-2,-2,1,-1,0,0,5,log(2),log(3))
mod = nlm(mllk, theta.star, x = x, z = z)
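## back-transforming the estimated parameters
# Since mllk() works with transformed parameters for unconstrained optimization,
# the estimates are back-transformed after fitting, using the same
# transformations as inside mllk().
theta.hat = mod$estimate
(beta.hat = matrix(theta.hat[1:4], 2, 2))
(delta.hat = c(plogis(theta.hat[5]), 1 - plogis(theta.hat[5])))
(mu.hat = theta.hat[6:7])
(sigma.hat = exp(theta.hat[8:9]))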