tensor_regress {tensorregress} | R Documentation |
Supervised Tensor Decomposition with Interactive Side Information
Description
Supervised tensor decomposition with interactive side information on multiple modes; this is the main function in the package. The function takes a response tensor, side information matrices on one or more modes, and a desired Tucker rank as input. The output is a rank-constrained M-estimate of the core tensor and factor matrices.
Usage
tensor_regress(
tsr,
X_covar1 = NULL,
X_covar2 = NULL,
X_covar3 = NULL,
core_shape,
niter = 20,
cons = c("non", "vanilla", "penalty"),
lambda = 0.1,
alpha = 1,
solver = "CG",
dist = c("binary", "poisson", "normal"),
traj_long = FALSE,
initial = c("random", "QR_tucker")
)
Arguments
tsr
response tensor with 3 modes
X_covar1
side information on the first mode (expected dimensions are illustrated in the sketch after this table)
X_covar2
side information on the second mode
X_covar3
side information on the third mode
core_shape
the Tucker rank of the tensor decomposition
niter
maximum number of iterations if the update does not converge
cons
the constraint method: "non" for no constraint, "vanilla" for a global scale-down at each iteration, "penalty" for adding a log-barrier penalty to the objective function
lambda
penalty coefficient for the "penalty" constraint
alpha
max norm constraint on the linear predictor
solver
solver for the objective function when using the "penalty" constraint; see "Details"
dist
distribution of the response tensor; see "Details"
traj_long
if "TRUE", set the minimum iteration number to 8; if "FALSE", set the minimum iteration number to 0
initial
initialization of the alternating optimization: "random" for random initialization, "QR_tucker" for deterministic initialization using the Tucker decomposition
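The following sketch (illustration only, not part of the package documentation) shows the dimensions the arguments are assumed to take: a d1 x d2 x d3 response tensor and, on mode k, a dk x pk side information matrix; all names and values below are made up.

## Illustration only (assumed dimensions): a 20 x 20 x 20 response tensor with
## 5 covariates on each mode; a mode without side information keeps the default NULL.
d <- c(20, 20, 20)
p <- c(5, 5, 5)
tsr <- array(rbinom(prod(d), 1, 0.5), dim = d)          # binary response tensor
X_covar1 <- matrix(rnorm(d[1] * p[1]), d[1], p[1])      # 20 x 5
X_covar2 <- matrix(rnorm(d[2] * p[2]), d[2], p[2])      # 20 x 5
X_covar3 <- matrix(rnorm(d[3] * p[3]), d[3], p[3])      # 20 x 5
fit <- tensor_regress(tsr, X_covar1, X_covar2, X_covar3,
                      core_shape = c(3, 3, 3), niter = 10,
                      cons = "non", dist = "binary", initial = "random")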
Details
The constraint "penalty" adds a log-barrier regularizer to the general objective function (the negative log-likelihood). The main function uses the solvers available in the function "optim" to minimize the objective function; the "solver" argument is passed to the argument "method" of "optim".
dist specifies the distribution of the response tensor: binary, poisson, or normal. If dist is set to "normal" and initial is set to "QR_tucker", then the function returns the results obtained right after the initialization.
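As a sketch of these options (not taken from the package documentation; the lambda, alpha and solver values are arbitrary choices), a penalized fit and the normal/QR_tucker special case might look like this:

## Sketch only: log-barrier ("penalty") fit on simulated normal data.
## lambda, alpha and solver are illustrative values; solver is forwarded to the
## "method" argument of optim().
data_n <- sim_data(34, whole_shape = c(20, 20, 20), core_shape = c(3, 3, 3),
                   p = c(5, 5, 5), dist = "normal", dup = 5, signal = 4)
fit_pen <- tensor_regress(data_n$tsr[[1]], data_n$X_covar1, data_n$X_covar2, data_n$X_covar3,
                          core_shape = c(3, 3, 3), niter = 10,
                          cons = "penalty", lambda = 0.1, alpha = 10,
                          solver = "CG", dist = "normal", initial = "random")
## With dist = "normal" and initial = "QR_tucker", the results of the
## initialization step are returned directly.
fit_init <- tensor_regress(data_n$tsr[[1]], data_n$X_covar1, data_n$X_covar2, data_n$X_covar3,
                           core_shape = c(3, 3, 3), cons = "non",
                           dist = "normal", initial = "QR_tucker")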
Value
a list containing the following:
W
a list of orthogonal factor matrices - one for each mode, with the number of columns given by core_shape
G
an array, core tensor with the size specified by core_shape
C_ts
an array, coefficient tensor; the Tucker product of G, A, B, C (the core tensor and the factor matrices in W)
U
linear predictor, i.e. the Tucker product of C_ts, X_covar1, X_covar2, X_covar3 (see the sketch after this list)
lglk
a vector containing loglikelihood at convergence
sigma
a scalar, estimated error variance (for Gaussian tensor) or dispersion parameter (for Bernoulli and Poisson tensors)
violate
a vector indicating whether each iteration violates the max norm constraint on the linear predictor; 1 indicates a violation
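These components are related by Tucker products, which can be checked numerically. The sketch below is not part of the package documentation; it assumes the objects re and data created in the Examples section and that the rTensor package is available.

## Sketch: verify the Tucker relationships among the returned components
## (assumes `re` and `data` from the Examples below and the rTensor package).
library(rTensor)
C_hat <- ttl(as.tensor(re$G), re$W, ms = 1:3)@data        # core times factor matrices
U_hat <- ttl(as.tensor(re$C_ts),
             list(data$X_covar1, data$X_covar2, data$X_covar3), ms = 1:3)@data
max(abs(C_hat - re$C_ts))   # expected to be close to zero
max(abs(U_hat - re$U))      # expected to be close to zero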
Examples
library(tensorregress)
seed = 34
dist = 'binary'
## simulate a 20 x 20 x 20 binary response tensor with a rank-(3,3,3) signal
## and 5 covariates on each mode
data = sim_data(seed, whole_shape = c(20,20,20), core_shape = c(3,3,3),
                p = c(5,5,5), dist = dist, dup = 5, signal = 4)
## unconstrained fit with random initialization
re = tensor_regress(data$tsr[[1]], data$X_covar1, data$X_covar2, data$X_covar3,
                    core_shape = c(3,3,3), niter = 10, cons = 'non',
                    dist = dist, initial = "random")
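A few follow-up checks on the fitted object (not part of the original example):

## Inspect the fit: log-likelihood trajectory, estimated dispersion, and
## constraint violations across iterations.
plot(re$lglk, type = "b", xlab = "iteration", ylab = "log-likelihood")
re$sigma
any(re$violate == 1)
sapply(re$W, dim)   # factor matrix dimensions for each mode
dim(re$C_ts)        # coefficient tensor dimensions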