| multiness_fit {multiness} | R Documentation | 
Fit the MultiNeSS model
Description
multiness_fit fits the Gaussian or logistic MultiNeSS model
with various options for parameter tuning.
Usage
multiness_fit(A, model, self_loops, refit, tuning, tuning_opts, optim_opts)
Arguments
| A | An n \times n \times m array containing edge entries for
an undirected multiplex network on n nodes and m layers. | 
| model | A string which provides the choice of model,
either 'gaussian' or 'logistic'. Defaults to 'gaussian'. | 
| self_loops | A Boolean, if FALSE, all diagonal entries are ignored in
optimization. Defaults to TRUE. | 
| refit | A Boolean, if TRUE, a refitting step is performed to
debias the eigenvalues of the estimates. Defaults to TRUE. | 
| tuning | A string which provides the tuning method; valid options are
'fixed', 'adaptive', or 'cv'. Defaults to 'adaptive'. | 
| tuning_opts | A list containing additional optional arguments controlling
parameter tuning. The arguments used depend on the choice of tuning method.
If tuning='fixed', multiness_fit will utilize the following arguments:

lambda: A positive scalar, the \lambda parameter in the nuclear norm penalty,
see Details. Defaults to 2.309 * sqrt(n*m).

alpha: A positive scalar or numeric vector of length m, the parameters
\alpha_k in the nuclear norm penalty, see Details. If a scalar is provided,
all \alpha_k parameters are set to that value. Defaults to 1/sqrt(m).

If tuning='adaptive', multiness_fit will utilize the following arguments:

layer_wise: A Boolean, if TRUE, the entry-wise variance is estimated
individually for each layer. Otherwise the estimates are pooled.
Defaults to TRUE.

penalty_const: A positive scalar C which scales the penalty parameters
(see Details). Defaults to 2.309.

penalty_const_lambda: A positive scalar c which scales only the \lambda
penalty parameter (see Details). Defaults to 1.

If tuning='cv', multiness_fit will utilize the following arguments:

layer_wise: A Boolean, if TRUE, the entry-wise variance is estimated
individually for each layer. Otherwise the estimates are pooled.
Defaults to TRUE.

N_cv: A positive integer, the number of repetitions of edge
cross-validation performed for each parameter setting. Defaults to 3.

p_cv: A positive scalar in the interval (0,1), the proportion of edge
entries held out in edge cross-validation. Defaults to 0.1.

penalty_const_lambda: A positive scalar c which scales only the \lambda
penalty parameter (see Details). Defaults to 1.

penalty_const_vec: A numeric vector with positive entries, the candidate
values of the constant C to scale the penalty parameters (see Details).
An optimal constant is chosen by edge cross-validation.
Defaults to c(1,1.5,...,3.5,4).

refit_cv: A Boolean, if TRUE, a refitting step is performed when fitting
the model for edge cross-validation. Defaults to TRUE.

verbose_cv: A Boolean, if TRUE, console output will provide updates on the
progress of edge cross-validation. Defaults to FALSE. | 
| optim_opts | A list containing additional optional arguments controlling
the proximal gradient descent algorithm:

check_obj: A Boolean, if TRUE, convergence is determined by checking the
decrease in the objective. Otherwise it is determined by checking the
average entry-wise difference in consecutive values of F. Defaults to TRUE.

eig_maxitr: A positive integer, maximum iterations for the internal
eigenvalue solver. Defaults to 1000.

eig_prec: A positive scalar, estimated eigenvalues below this threshold
are set to zero. Defaults to 1e-2.

eps: A positive scalar, convergence threshold for proximal gradient
descent. Defaults to 1e-6.

eta: A positive scalar, step size for proximal gradient descent.
Defaults to 1 for the Gaussian model, 5 for the logistic model.

init: A string, initialization method. Valid options are 'fix' (using
initializers optim_opts$V_init and optim_opts$U_init), 'zero' (initialize
all parameters at zero), or 'svd' (initialize with a truncated SVD of rank
optim_opts$init_rank). Defaults to 'zero'.

K_max: A positive integer, maximum iterations for proximal gradient
descent. Defaults to 100.

max_rank: A positive integer, maximum rank for the internal eigenvalue
solver. Defaults to sqrt(n).

missing_pattern: An n \times n \times m Boolean array with TRUE for each
observed entry and FALSE for missing entries. If unspecified, it is set
to !is.na(A).

positive: A Boolean, if TRUE, singular value thresholding only retains
positive eigenvalues. Defaults to FALSE.

return_posns: A Boolean, if TRUE, returns estimates of the latent
positions based on ASE. Defaults to FALSE.

verbose: A Boolean, if TRUE, console output will provide updates on the
progress of proximal gradient descent. Defaults to FALSE. | 
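As a sketch of how the two option lists fit together (argument names are those documented above; `A` is assumed to be an n \times n \times m adjacency array, e.g. from multiness_sim):

```r
# Sketch only: adaptive tuning options passed alongside optimization options.
fit <- multiness_fit(A = A,
                     model = "gaussian",
                     self_loops = TRUE,
                     refit = TRUE,
                     tuning = "adaptive",
                     tuning_opts = list(layer_wise = TRUE,
                                        penalty_const = 2.309),
                     optim_opts = list(K_max = 200,  # allow more PGD iterations
                                       eps = 1e-8,   # tighter convergence
                                       verbose = FALSE))
```

Unlisted entries of tuning_opts and optim_opts keep their documented defaults.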
Details
A MultiNeSS model is fit to an n \times n \times m array A of
symmetric adjacency matrices on a common set of nodes. Fitting
proceeds by convex proximal gradient descent on the entries of
F = VV^{T} and G_k = U_kU_k^{T}, see
MacDonald et al. (2020),
Section 3.2. Additional optional arguments for
the gradient descent routine can be provided in optim_opts.
refit provides an option
to perform an additional refitting step to debias the eigenvalues
of the estimates, see
MacDonald et al. (2020), Section 3.3.
By default, multiness_fit will return estimates of the matrices
F and G_k. optim_opts$return_posns provides an option
to instead return estimates of latent positions V and U_k
based on the adjacency spectral embedding (if such a factorization exists).
Tuning parameters \lambda and \alpha_k in the nuclear norm penalty
\lambda ||F||_* + \sum_k \lambda \alpha_k ||G_k||_*
are either set by the
user (tuning='fixed'), selected adaptively using a
robust estimator of the
entry-wise variance (tuning='adaptive'), or
selected using edge cross-validation (tuning='cv'). For more details
see MacDonald et al. (2020),
Section 3.4. Additional optional arguments for parameter tuning
can be provided in tuning_opts.
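As a minimal illustration of the penalty being tuned, the objective's nuclear norm term \lambda ||F||_* + \sum_k \lambda \alpha_k ||G_k||_* can be evaluated directly (helper names here are illustrative, not package API):

```r
# Sketch: evaluate the nuclear norm penalty for given parameter matrices.
# F_mat is n x n, G_list is a length-m list of n x n matrices,
# alpha is a scalar or length-m vector.
nuclear_norm <- function(M) sum(svd(M)$d)  # sum of singular values
penalty <- function(F_mat, G_list, lambda, alpha) {
  lambda * nuclear_norm(F_mat) +
    sum(lambda * alpha * vapply(G_list, nuclear_norm, numeric(1)))
}
```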
Value
A list is returned with the MultiNeSS model estimates, dimensions of
the common and individual latent spaces, and some additional optimization
output:
| F_hat | An n \times n matrix estimating the common part of the expected
adjacency matrix, F = VV^{T}. If optim_opts$return_posns is TRUE, this is not returned. | 
| G_hat | A list of length m, the collection of n \times n matrices
estimating the individual part of each adjacency matrix, G_k = U_kU_k^{T}.
If optim_opts$return_posns is TRUE, this is not returned. | 
| V_hat | A matrix estimating the common latent positions.
Returned if optim_opts$return_posns is TRUE. | 
| U_hat | A list of length m, the collection of matrices
estimating the individual latent positions.
Returned if optim_opts$return_posns is TRUE. | 
| d1 | A non-negative integer, the estimated common dimension of the
latent space. | 
| d2 | An integer vector of length m, the estimated individual
dimension of the latent space for each layer. | 
| K | A positive integer, the number of iterations run in proximal
gradient descent. | 
| convergence | An integer convergence code, 0 if proximal
gradient descent converged in fewer than optim_opts$K_max iterations, 1 otherwise. | 
| lambda | A positive scalar, the tuned \lambda penalty parameter (see Details). | 
| alpha | A numeric vector of length m, the tuned \alpha penalty parameters
(see Details). | 
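The components above can be inspected from the returned list; a minimal sketch, assuming `fit` is the output of multiness_fit with default options:

```r
# Sketch: inspecting a fitted model object returned by multiness_fit().
fit$d1                  # estimated common latent dimension
fit$d2                  # per-layer individual dimensions (length-m vector)
fit$lambda; fit$alpha   # tuned penalty parameters
if (fit$convergence != 0) warning("PGD stopped at optim_opts$K_max iterations")
# leading eigenvalues of the common part estimate F_hat
eigen(fit$F_hat, symmetric = TRUE)$values[seq_len(fit$d1)]
```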
Examples
# gaussian model data
data1 <- multiness_sim(n=100,m=4,d1=2,d2=2,
                       model="gaussian")
# multiness_fit with fixed tuning
fit1 <- multiness_fit(A=data1$A,
                      model="gaussian",
                      self_loops=TRUE,
                      refit=FALSE,
                      tuning="fixed",
                      tuning_opts=list(lambda=40,alpha=1/2),
                      optim_opts=list(max_rank=20,verbose=TRUE))
# multiness_fit with adaptive tuning
fit2 <- multiness_fit(A=data1$A,
                      refit=TRUE,
                      tuning="adaptive",
                      tuning_opts=list(layer_wise=FALSE),
                      optim_opts=list(return_posns=TRUE))
# logistic model data
data2 <- multiness_sim(n=100,m=4,d1=2,d2=2,
                       model="logistic",
                       self_loops=FALSE)
# multiness_fit with cv tuning
fit3 <- multiness_fit(A=data2$A,
                      model="logistic",
                      self_loops=FALSE,
                      tuning="cv",
                      tuning_opts=list(N_cv=2,
                                       penalty_const_vec=c(1,2,2.309,3),
                                       verbose_cv=TRUE))
[Package multiness version 1.0.2 Index]