train_magma {MagmaClustR}    R Documentation
Training Magma with an EM algorithm
Description
The hyper-parameters and the hyper-posterior distribution involved in Magma
can be learned thanks to an EM algorithm implemented in train_magma.
By providing a dataset, the model hypotheses (hyper-prior mean parameter and
covariance kernels) and initialisation values for the hyper-parameters, the
function computes maximum likelihood estimates of the HPs as well as the
mean and covariance parameters of the Gaussian hyper-posterior distribution
of the mean process.
Usage
train_magma(
data,
prior_mean = NULL,
ini_hp_0 = NULL,
ini_hp_i = NULL,
kern_0 = "SE",
kern_i = "SE",
common_hp = TRUE,
grid_inputs = NULL,
pen_diag = 1e-10,
n_iter_max = 25,
cv_threshold = 0.001,
fast_approx = FALSE
)
Arguments
data
A tibble or data frame. Required columns: ID, Input, Output.
prior_mean
Hyper-prior mean parameter (m_0) of the mean GP. This argument can be specified under various formats, such as: NULL (default), a number used as a constant mean, a vector of values over the grid of reference inputs, or a function applied to the Input values.
ini_hp_0
A named vector, tibble or data frame of hyper-parameters associated with kern_0, the mean process' kernel. If NULL (default), random values are used as initialisation.
ini_hp_i
A tibble or data frame of hyper-parameters associated with kern_i, the individual processes' kernels. If NULL (default), random values are used as initialisation.
kern_0
A kernel function, associated with the mean GP. Several popular kernels (see The Kernel Cookbook) are already implemented and can be selected among "SE" (squared exponential), "PERIO" (periodic) and "RQ" (rational quadratic).
kern_i
A kernel function, associated with the individual GPs ("SE", "PERIO" and "RQ" are also available here).
common_hp
A logical value, indicating whether the set of hyper-parameters is assumed to be common to all individuals.
grid_inputs
A vector, indicating the grid of additional reference inputs on which the mean process' hyper-posterior should be evaluated.
pen_diag
A number. A jitter term, added on the diagonal to prevent numerical issues when inverting nearly singular matrices.
n_iter_max
A number, indicating the maximum number of iterations the EM algorithm is allowed to run before stopping, if convergence has not been reached.
cv_threshold
A number, indicating the threshold of the likelihood gain under which the EM algorithm will stop. The convergence condition is defined as the difference of likelihoods between two consecutive steps, divided by the absolute value of the last one: (LL_n - LL_(n-1)) / |LL_n|.
fast_approx
A boolean, indicating whether the EM algorithm should stop after only one iteration of the E-step. This advanced feature is mainly used to provide a faster approximation of the model selection procedure, by preventing any optimisation over the hyper-parameters. |
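As an illustration of how these arguments combine, here is a minimal sketch of a call with explicit (non-default) settings; my_data is a hypothetical tibble with the required ID, Input and Output columns, and the constant prior_mean relies on the assumption that a single number is an accepted format:

model <- train_magma(
  data = my_data,       # hypothetical tibble with columns ID, Input, Output
  prior_mean = 0,       # constant hyper-prior mean for the mean GP (assumed format)
  kern_0 = "SE",        # squared exponential kernel for the mean process
  kern_i = "SE",        # squared exponential kernel for the individual processes
  common_hp = TRUE,     # share hyper-parameters across individuals
  pen_diag = 1e-10,     # jitter term added on covariance diagonals
  n_iter_max = 25,      # at most 25 EM iterations
  cv_threshold = 1e-3   # relative likelihood-gain threshold for convergence
)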
Details
The user can specify custom kernel functions for the arguments kern_0 and kern_i. The hyper-parameters used in the kernel should have explicit names, and be contained within the hp argument. hp should typically be defined as a named vector or a data frame. Although it is not mandatory for the train_magma function to run, gradients can be provided within the kernel function definition. See for example se_kernel to create a custom kernel function displaying an adequate format to be used in Magma.
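Although the exact interface expected for custom kernels is defined by the package (se_kernel is the authoritative reference), the following hypothetical sketch illustrates the general shape of a kernel whose hyper-parameters are read by name from the hp argument; the function name, the hyper-parameter names and the two-point signature are assumptions made for illustration only:

# Hypothetical custom squared exponential kernel evaluated between two inputs.
# Hyper-parameter names ('variance', 'lengthscale') are illustrative; they only
# need to match the names provided in ini_hp_0 / ini_hp_i.
my_se_kernel <- function(x, y, hp) {
  sq_dist <- sum((x - y)^2)
  hp[["variance"]] * exp(-sq_dist / (2 * hp[["lengthscale"]]^2))
}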
Value
A list, gathering the results of the EM algorithm used for training in Magma. The elements of the list are:
hp_0: A tibble of the trained hyper-parameters for the mean process' kernel.
hp_i: A tibble of all the trained hyper-parameters for the individual processes' kernels.
hyperpost: A sub-list gathering the parameters of the mean process' hyper-posterior distribution, namely:
mean: A tibble, the hyper-posterior mean parameter (Output) evaluated at each training reference Input.
cov: A matrix, the covariance parameter for the hyper-posterior distribution of the mean process.
pred: A tibble, the predicted mean and variance at Input for the mean process' hyper-posterior distribution under a format that allows the direct visualisation as a GP prediction.
ini_args: A list containing the initial function arguments and values for the hyper-prior mean and the hyper-parameters. In particular, if those arguments were set to NULL, ini_args allows us to retrieve the (randomly chosen) initialisations used during training.
seq_loglikelihood: A vector, containing the sequence of log-likelihood values associated with each iteration.
converged: A logical value indicating whether the EM algorithm converged or not.
training_time: Total running time of the complete training.
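For reference, the listed elements can be retrieved directly from the returned list; a short sketch, assuming model holds the output of train_magma():

model$hp_0                # trained hyper-parameters of the mean process' kernel
model$hp_i                # trained hyper-parameters of the individual processes' kernels
model$hyperpost$mean      # hyper-posterior mean of the mean process
model$hyperpost$cov       # hyper-posterior covariance matrix
model$converged           # whether the EM algorithm reached the convergence criterion
model$seq_loglikelihood   # log-likelihood value at each EM iteration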
Examples
TRUE
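A more complete sketch, assuming the simu_db() helper exported by MagmaClustR to simulate a small synthetic dataset before training (argument values below are illustrative):

library(MagmaClustR)
set.seed(42)
## Assumption: simu_db() returns a tibble with ID, Input and Output columns,
## with M individuals and N observations per individual.
data <- simu_db(M = 10, N = 15)

model <- train_magma(
  data = data,
  kern_0 = "SE",
  kern_i = "SE",
  common_hp = TRUE
)

model$converged   # check whether the EM algorithm converged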