tensorEVD {tensorEVD}	R Documentation

Tensor EVD

Description

Fast eigenvalue decomposition (EVD) of the Hadamard product of two matrices

Usage

tensorEVD(K1, K2, ID1, ID2, alpha = 1.0,
          EVD1 = NULL, EVD2 = NULL,
          d.min = .Machine$double.eps, 
          make.dimnames = FALSE, verbose = FALSE)

Arguments

K1, K2

(numeric) Covariance structure matrices

ID1

(character/integer) Vector of length n with either names or indices mapping from rows/columns of K1 into the resulting tensor product

ID2

(character/integer) Vector of length n with either names or indices mapping from rows/columns of K2 into the resulting tensor product

alpha

(numeric) Proportion of variance of the tensor product to be explained by the tensor eigenvectors

EVD1

(list) (Optional) Eigenvectors and eigenvalues of K1 as produced by the eigen function

EVD2

(list) (Optional) Eigenvectors and eigenvalues of K2 as produced by the eigen function

d.min

(numeric) Tensor eigenvalue threshold. Default is the machine epsilon (.Machine$double.eps). Only eigenvectors whose eigenvalue exceeds this threshold are returned

make.dimnames

TRUE or FALSE indicating whether to add rownames and colnames attributes to the output

verbose

TRUE or FALSE indicating whether to show progress

Details

Let the n × n matrix K be the Hadamard product (aka element-wise or entry-wise product) involving two smaller matrices K1 and K2 of dimensions n1 × n1 and n2 × n2, respectively,

K = (Z1 K1 Z'1) ⊙ (Z2 K2 Z'2)

where Z1 and Z2 are incidence matrices mapping from rows (and columns) of the resulting Hadamard to rows (and columns) of K1 and K2, respectively.
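As a minimal sketch in base R (not part of the package), an incidence matrix Z built from an index vector ID has exactly one 1 per row, and the mapped term Z K Z' reduces to row/column indexing of K:

```r
# Sketch: an incidence matrix Z built from an index vector ID
# satisfies Z %*% K %*% t(Z) == K[ID, ID]
set.seed(1)
n1 <- 4
K1 <- crossprod(matrix(rnorm(n1*n1), ncol=n1))   # small PSD matrix
ID1 <- c(1, 3, 3, 2, 4)                          # rows of the product -> rows of K1

Z1 <- matrix(0, nrow=length(ID1), ncol=n1)
Z1[cbind(seq_along(ID1), ID1)] <- 1              # one 1 per row of Z1

all.equal(Z1 %*% K1 %*% t(Z1), K1[ID1, ID1])     # TRUE
```

This equivalence is why the function can work directly from the ID1 and ID2 vectors without ever forming Z1 or Z2.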

Let the eigenvalue decomposition (EVD) of K1 and K2 be K1 = V1 D1 V'1 and K2 = V2 D2 V'2. Using properties of the Hadamard and Kronecker products, an EVD of the Hadamard product K can be approximated using the EVD of K1 and K2 as

K = V D V'

where D = D1 ⊗ D2 is a diagonal matrix containing the N = n1 × n2 tensor eigenvalues d1 ≥ ... ≥ dN ≥ 0, and V = (Z1 ∘ Z2)(V1 ⊗ V2) = [v1,...,vN] is a matrix containing the N tensor eigenvectors vk; here the term Z1 ∘ Z2 is the "face-splitting product" (aka "transposed Khatri–Rao product") of the matrices Z1 and Z2.
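For a full design, the structure of D can be sketched directly (an illustration in base R, not package code): the tensor eigenvalues are the pairwise products of the eigenvalues of K1 and K2, which are exactly the eigenvalues of the Kronecker product K1 ⊗ K2:

```r
# Sketch: tensor eigenvalues for a full design are the entries of
# kronecker(d1, d2), sorted in decreasing order
set.seed(1)
n1 <- 3; n2 <- 4
K1 <- crossprod(matrix(rnorm(n1*n1), ncol=n1))
K2 <- crossprod(matrix(rnorm(n2*n2), ncol=n2))

d <- sort(kronecker(eigen(K1)$values, eigen(K2)$values), decreasing=TRUE)

# matches the eigenvalues of the Kronecker product K1 %x% K2
all.equal(d, eigen(K1 %x% K2)$values)            # TRUE
```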

Each tensor eigenvector vk is derived separately as a Hadamard product using the corresponding i(k)-th and j(k)-th eigenvectors v1i(k) and v2j(k) from V1 and V2, respectively; that is,

vk = (Z1v1i(k))⊙(Z2v2j(k))

The tensorEVD function derives each of these eigenvectors vk by matrix indexing using the integer vectors ID1 and ID2. The entries of these vectors are the row (and column) numbers of K1 and K2 that are mapped to each row of Z1 and Z2, respectively.
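The indexing shortcut can be sketched for one eigenvector (a base R illustration under a full design, not package code): subsetting V1 and V2 with ID1 and ID2 plays the role of multiplying by Z1 and Z2, so the Hadamard product of the two subsetted columns reproduces the corresponding Kronecker eigenvector:

```r
# Sketch: the tensor eigenvector for the pair (i, j) via indexing,
# vk = (Z1 v1i) * (Z2 v2j), written as V1[ID1, i] * V2[ID2, j]
set.seed(1)
n1 <- 3; n2 <- 4
K1 <- crossprod(matrix(rnorm(n1*n1), ncol=n1))
K2 <- crossprod(matrix(rnorm(n2*n2), ncol=n2))
V1 <- eigen(K1)$vectors
V2 <- eigen(K2)$vectors

ID1 <- rep(seq(n1), each=n2)    # full (Kronecker) design
ID2 <- rep(seq(n2), times=n1)

i <- 1; j <- 1                  # leading eigenvector pair
vk <- V1[ID1, i] * V2[ID2, j]   # Hadamard product of mapped eigenvectors

# for the full design this equals the Kronecker product of the two columns
all.equal(vk, kronecker(V1[, i], V2[, j]))       # TRUE
```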

Value

Returns a list object that contains the elements:

values

(numeric) Vector with the tensor eigenvalues

vectors

(numeric) Matrix with the tensor eigenvectors as columns

Examples

  require(tensorEVD)
  set.seed(195021)
  
  # Generate matrices K1 and K2 of dimensions n1 and n2
  n1 = 10; n2 = 15
  K1 = crossprod(matrix(rnorm(n1*(n1+10)), ncol=n1))
  K2 = crossprod(matrix(rnorm(n2*(n2+10)), ncol=n2))

  # (a) Example 1. Full design (Kronecker product)
  ID1 = rep(seq(n1), each=n2)
  ID2 = rep(seq(n2), times=n1)

  # Direct EVD of the Hadamard product
  K = K1[ID1,ID1]*K2[ID2,ID2]
  EVD0 = eigen(K)

  # Tensor EVD using K1 and K2
  EVD = tensorEVD(K1, K2, ID1, ID2)

  # Eigenvectors and eigenvalues are numerically equal
  all.equal(EVD0$values, EVD$values)
  all.equal(abs(EVD0$vectors), abs(EVD$vectors)) 

  # (b) If a proportion of variance explained is specified, 
  # only the eigenvectors needed to explain that proportion are derived
  alpha = 0.95
  EVD = tensorEVD(K1, K2, ID1, ID2, alpha=alpha)
  dim(EVD$vectors)

  # For the direct EVD
  varexp = cumsum(EVD0$values/sum(EVD0$values))
  index = 1:which.min(abs(varexp-alpha))
  dim(EVD0$vectors[,index])

  # (c) Example 2. Incomplete design (Hadamard product)
  # Eigenvectors and eigenvalues are no longer equivalent
  n = n1*n2   # Sample size n
  ID1 = sample(seq(n1), n, replace=TRUE) # Random sample for ID1
  ID2 = sample(seq(n2), n, replace=TRUE) # Random sample for ID2

  K = K1[ID1,ID1]*K2[ID2,ID2]
  EVD0 = eigen(K)
  EVD = tensorEVD(K1, K2, ID1, ID2)

  all.equal(EVD0$values, EVD$values)
  all.equal(abs(EVD0$vectors), abs(EVD$vectors)) 
  
  # However, the sum of the eigenvalues is equal to the trace(K)
  c(sum(EVD0$values), sum(EVD$values), sum(diag(K)))

  # And both provide the same approximation for K
  K01 = EVD0$vectors%*%diag(EVD0$values)%*%t(EVD0$vectors)
  K02 = EVD$vectors%*%diag(EVD$values)%*%t(EVD$vectors)
  c(all.equal(K,K01), all.equal(K,K02))

  # When n is different from N=n1 x n2, the two methods provide different 
  # numbers of eigenvectors/eigenvalues. The eigen function provides 
  # a number of eigenvectors equal to the minimum of n and N;
  # for tensorEVD, this number is always N

  # (d) Sample size n being half of n1 x n2
  n = n1*n2/2
  ID1 = sample(seq(n1), n, replace=TRUE)
  ID2 = sample(seq(n2), n, replace=TRUE)

  K = K1[ID1,ID1]*K2[ID2,ID2]
  EVD0 = eigen(K)
  EVD = tensorEVD(K1, K2, ID1, ID2)

  c(eigen=sum(EVD0$values>1E-10), tensorEVD=sum(EVD$values>1E-10))

  # (e) Sample size n being twice n1 x n2
  n = n1*n2*2
  ID1 = sample(seq(n1), n, replace=TRUE)
  ID2 = sample(seq(n2), n, replace=TRUE)

  K = K1[ID1,ID1]*K2[ID2,ID2]
  EVD0 = eigen(K)
  EVD = tensorEVD(K1, K2, ID1, ID2)

  c(eigen=sum(EVD0$values>1E-10), tensorEVD=sum(EVD$values>1E-10))

[Package tensorEVD version 0.1.3 Index]