{autoMFA}  R Documentation

Incremental Automated Mixtures of Factor Analyzers


Description

An alternative implementation of the AMFA algorithm (Wang and Lin 2020). The number of factors, q, is estimated during the fitting process of each MFA model. Instead of employing a grid search over g as the AMFA method does, this method starts with a one-component MFA model and splits components according to their multivariate kurtosis, using the same approach as amofa (Kaya and Salah 2015). Once a component has been selected for splitting, the new components are initialised in the same manner as vbmfa (Ghahramani and Beal 2000). Splitting continues until every component has had numTries splits attempted with no decrease in BIC, after which the current model is returned.
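The stopping rule above accepts a split only if it decreases the BIC of the fitted model. As a hedged sketch (not package code), the standard BIC for a model with log-likelihood logL, d free parameters, and n observations is:

```r
## Sketch only: standard BIC formula, lower is better.
## logL = maximised log-likelihood, d = number of free parameters,
## n = number of observations. The exact parameter count d used by
## the package is not shown in this help page.
bic <- function(logL, d, n) {
  -2 * logL + d * log(n)
}

## A split that adds parameters is kept only if the gain in
## log-likelihood outweighs the added d * log(n) penalty.
```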

Usage

  numTries = 2,
  eta = 0.005,
  itmax = 500,
  tol = 1e-05,
  conv_measure = "diff",
  nkmeans = 1,
  nrandom = 1,
  varimax = FALSE



Arguments

Y:	An n by p data matrix, where n is the number of observations and p is the number of dimensions of the data.

numTries:	The number of attempts that should be made to split each component.

eta:	The smallest possible entry in any of the error matrices D_i (Zhao and Yu 2008).

itmax:	The maximum number of ECM iterations allowed for the estimation of each MFA model.

tol:	The ECM algorithm terminates if the measure of convergence falls below this value.

conv_measure:	The convergence criterion of the ECM algorithm. The default "diff" stops the ECM iterations if |l^(k+1) - l^(k)| < tol, where l^(k) is the log-likelihood at the kth ECM iteration. If "ratio", convergence is instead measured using |(l^(k+1) - l^(k))/l^(k+1)|.

nkmeans:	The number of times the k-means algorithm will be used to initialise the (single-component) starting models.

nrandom:	The number of randomly initialised (single-component) starting models.

varimax:	Logical indicating whether the output factor loading matrices should be constrained using varimax rotation.
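As a small illustration (not package code; the log-likelihood values are invented), the two conv_measure criteria can disagree on the same pair of successive log-likelihoods:

```r
## Hypothetical log-likelihoods from two successive ECM iterations
tol  <- 1e-05
l_k  <- -1234.5678   # l^(k)
l_k1 <- -1234.5677   # l^(k+1)

diff_crit  <- abs(l_k1 - l_k)           # conv_measure = "diff"
ratio_crit <- abs((l_k1 - l_k) / l_k1)  # conv_measure = "ratio"

diff_crit  < tol   # FALSE: absolute change (1e-04) still exceeds tol
ratio_crit < tol   # TRUE: relative change (~8.1e-08) is below tol
```

Because "ratio" scales the change by the magnitude of the log-likelihood, it typically declares convergence earlier than "diff" when the log-likelihood is large in absolute value.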


Value

A list containing the following elements:


References

Wang W, Lin T (2020). “Automated learning of mixtures of factor analysis models with missing information.” TEST. ISSN 1133-0686.

Kaya H, Salah AA (2015). “Adaptive Mixtures of Factor Analyzers.” arXiv preprint arXiv:1507.02801.

Ghahramani Z, Beal MJ (2000). “Variational inference for Bayesian Mixtures of Factor Analysers.” In Advances in neural information processing systems, 449–455.

Zhao J, Yu PLH (2008). “Fast ML Estimation for the Mixture of Factor Analyzers via an ECM Algorithm.” IEEE Transactions on Neural Networks, 19(11), 1956-1961. ISSN 1045-9227.

See Also

amofa, vbmfa


Examples

RNGversion('4.0.3'); set.seed(3)
## remainder of the example call is garbled in the source:
## `... <- ...(..., itmax = 1, numTries = 0)`

[Package autoMFA version 1.0.0 Index]