direct_numerical_maximization {HMMpa}    R Documentation

Estimation by Directly Maximizing the log-Likelihood

Description

Estimates the parameters of a (stationary) discrete-time hidden Markov model by directly maximizing the log-likelihood of the model via the nlm function. See MacDonald & Zucchini (2009, Chapter 3) for further details.
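
For orientation, a minimal sketch of the approach for a Poisson-HMM (illustrative only, not the internal HMMpa code; all names below are made up for this example): the natural parameters are mapped to unconstrained working parameters, the negative log-likelihood is evaluated by a scaled forward recursion, and nlm minimizes it. In this sketch, the role of DNM_limit_accuracy, DNM_max_iter and DNM_print is played by nlm's gradtol, iterlim and print.level controls.

## Minimal sketch: negative log-likelihood of a stationary Poisson-HMM
## as a function of unconstrained working parameters (illustrative only).
pois_HMM_neg_logL <- function(working_par, x, m) {
  ## back-transform working parameters to natural parameters
  lambda <- exp(working_par[1:m])                        # state-dependent means
  gamma <- diag(m)
  gamma[!diag(m)] <- exp(working_par[(m + 1):(m * m)])   # off-diagonal entries
  gamma <- gamma / rowSums(gamma)                        # rows sum to 1
  delta <- solve(t(diag(m) - gamma + 1), rep(1, m))      # stationary distribution
  ## scaled forward recursion for the log-likelihood
  phi <- delta * dpois(x[1], lambda)
  logL <- log(sum(phi))
  phi <- phi / sum(phi)
  for (t in 2:length(x)) {
    phi <- (phi %*% gamma) * dpois(x[t], lambda)
    logL <- logL + log(sum(phi))
    phi <- phi / sum(phi)
  }
  -logL
}

## nlm() then minimizes over the working parameters, e.g. for m = 4 states:
## wp0 <- c(log(c(4, 9, 17, 25)), rep(log(0.1), 4 * 3))
## fit <- nlm(pois_HMM_neg_logL, wp0, x = x, m = 4,
##            gradtol = 0.001, iterlim = 50, print.level = 2)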

Usage

direct_numerical_maximization(x, m, delta, gamma, 
     distribution_class, distribution_theta, 
     DNM_limit_accuracy = 0.001, DNM_max_iter = 50, 
     DNM_print = 2)

Arguments

x

a vector object containing the time-series of observations that are assumed to be realizations of the (hidden Markov state dependent) observation process of the model.

m

a (finite) number of states in the hidden Markov chain.

delta

a vector object containing starting values for the marginal probability distribution of the m states of the Markov chain at time point t=1. This implementation of the algorithm uses the stationary distribution as delta (see the sketch at the end of this argument list).

gamma

a matrix (nrow=ncol=m) containing starting values for the transition matrix of the hidden Markov chain.

distribution_class

a single character string object with the abbreviated name of the m observation distributions of the Markov dependent observation process. The following distributions are supported by this algorithm: Poisson (pois), generalized Poisson (genpois) and normal (norm; the discrete log-likelihood is not applicable with this algorithm).

distribution_theta

a list object containing starting values for the parameters of the m observation distributions that are dependent on the hidden Markov state.

DNM_limit_accuracy

a single numerical value representing the convergence criterion of the direct numerical maximization algorithm using the nlm function. Default value is 0.001.

DNM_max_iter

a single numerical value representing the maximum number of iterations of the direct numerical maximization using the nlm function. Default value is 50.

DNM_print

a single numerical value determining the level of printing of the nlm function. See nlm for further information. The value 0 suppresses all printing. Default value is 2 for full printing.
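
The stationary distribution mentioned under delta can be obtained from a transition matrix by solving delta %*% gamma = delta together with sum(delta) = 1. A minimal sketch (assuming an irreducible Markov chain; not necessarily the package's internal computation):

## Illustrative only: stationary distribution of a transition matrix gamma.
stationary_distribution <- function(gamma) {
  m <- nrow(gamma)
  ## solves delta %*% (I - gamma + U) = rep(1, m), with U a matrix of ones
  solve(t(diag(m) - gamma + 1), rep(1, m))
}

## e.g. stationary_distribution(0.7 * diag(4) + 0.3 / 4) returns rep(0.25, 4)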

Value

direct_numerical_maximization returns a list containing the estimated parameters of the hidden Markov model and other components.

x

input time-series of observations.

m

input number of hidden states in the Markov chain.

logL

a numerical value representing the log-likelihood of the fitted model, calculated by the forward_backward_algorithm.

AIC

a numerical value representing Akaike's information criterion for the hidden Markov model with estimated parameters.

BIC

a numerical value representing the Bayesian information criterion for the hidden Markov model with estimated parameters (the standard formulas for AIC and BIC are sketched after this list).

delta

a vector object containing the estimates for the marginal probability distribution of the m states of the Markov chain at time point t=1.

gamma

a matrix containing the estimates for the transition matrix of the hidden Markov chain.

distribution_theta

a list object containing estimates for the parameters of the m observation distributions that are dependent on the hidden Markov state.

distribution_class

input distribution class.
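
For the AIC and BIC components listed above, the standard formulas, written as a minimal R sketch (illustrative helper names; p is the number of freely estimated parameters, n_obs the length of the time series; the exact parameter count used internally by the package is not reproduced here):

## Illustrative only: standard information criteria for a fitted model.
aic_hmm <- function(logL, p) -2 * logL + 2 * p
bic_hmm <- function(logL, p, n_obs) -2 * logL + p * log(n_obs)

## For an m-state Poisson-HMM, p is typically m + m * (m - 1)
## (m Poisson means plus m * (m - 1) free transition probabilities).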

Author(s)

The basic algorithm for a Poisson-HMM is provided by MacDonald & Zucchini (2009, Appendix A.1). Extension and implementation by Vitali Witowski (2013).

References

MacDonald, I. L., Zucchini, W. (2009). Hidden Markov Models for Time Series: An Introduction Using R. Boca Raton: Chapman & Hall.

See Also

HMM_based_method, HMM_training, Baum_Welch_algorithm, forward_backward_algorithm, initial_parameter_training

Examples


################################################################
### Fictitious observations ####################################
################################################################

x <- c(1,16,19,34,22,6,3,5,6,3,4,1,4,3,5,7,9,8,11,11,
  14,16,13,11,11,10,12,19,23,25,24,23,20,21,22,22,18,7,
  5,3,4,3,2,3,4,5,4,2,1,3,4,5,4,5,3,5,6,4,3,6,4,8,9,12,
  9,14,17,15,25,23,25,35,29,36,34,36,29,41,42,39,40,43,
  37,36,20,20,21,22,23,26,27,28,25,28,24,21,25,21,20,21,
  11,18,19,20,21,13,19,18,20,7,18,8,15,17,16,13,10,4,9,
  7,8,10,9,11,9,11,10,12,12,5,13,4,6,6,13,8,9,10,13,13,
  11,10,5,3,3,4,9,6,8,3,5,3,2,2,1,3,5,11,2,3,5,6,9,8,5,
  2,5,3,4,6,4,8,15,12,16,20,18,23,18,19,24,23,24,21,26,
  36,38,37,39,45,42,41,37,38,38,35,37,35,31,32,30,20,39,
  40,33,32,35,34,36,34,32,33,27,28,25,22,17,18,16,10,9,
  5,12,7,8,8,9,19,21,24,20,23,19,17,18,17,22,11,12,3,9,
  10,4,5,13,3,5,6,3,5,4,2,5,1,2,4,4,3,2,1) 


### Assumptions (number of states, probability vector, 
### transition matrix, and distribution parameters)
m <- 4
delta <- c(0.25, 0.25, 0.25, 0.25)
gamma <- 0.7 * diag(m) + 0.3 / m   # each row sums to 1
distribution_class <- "pois"
distribution_theta <- list(lambda = c(4, 9, 17, 25))

### Estimation of a HMM using the method of 
### direct numerical maximization

trained_HMM_with_m_hidden_states <- 
  direct_numerical_maximization(x = x, 
    m = m, 
    delta = delta, 
    gamma = gamma, 
    distribution_class = distribution_class, 
    distribution_theta = distribution_theta, 
    DNM_max_iter = 100)

print(trained_HMM_with_m_hidden_states)
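
### The components listed under "Value" can be inspected directly
### from the returned list, for example:
trained_HMM_with_m_hidden_states$logL
trained_HMM_with_m_hidden_states$AIC
trained_HMM_with_m_hidden_states$BIC
trained_HMM_with_m_hidden_states$gamma
trained_HMM_with_m_hidden_states$distribution_theta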

