sma {epca}    R Documentation
Sparse Matrix Approximation
Description
Perform the sparse matrix approximation (SMA) of a data matrix x as three multiplicative components: z, b, and t(y), where z and y are sparse, and b is low-rank but not necessarily diagonal.
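For orientation, a minimal sketch of the shape of this factorization (the dimensions below are illustrative and mirror the Examples section further down, not an excerpt from the package):
n <- 6; p <- 4; k <- 2
z <- matrix(rnorm(n * k), n, k)  # n-by-k, sparse in the actual method
b <- matrix(rnorm(k * k), k, k)  # k-by-k, not necessarily diagonal
y <- matrix(rnorm(p * k), p, k)  # p-by-k, sparse in the actual method
x.approx <- z %*% b %*% t(y)     # the n-by-p product that approximates x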
Usage
sma(
x,
k = min(5, dim(x)),
gamma = NULL,
rotate = c("varimax", "absmin"),
shrink = c("soft", "hard"),
center = FALSE,
scale = FALSE,
normalize = FALSE,
order = FALSE,
flip = FALSE,
max.iter = 1000,
epsilon = 1e-05,
quiet = TRUE
)
Arguments
x: the data matrix to be approximated.
k: the rank of the approximation, i.e., the number of sparse components; defaults to min(5, dim(x)).
gamma: the sparsity parameter; defaults to NULL.
rotate: the rotation technique to use, either "varimax" (the default) or "absmin"; see Details.
shrink: the shrinkage operator to use, either "soft" (the default) or "hard" thresholding; see Details.
center: logical, whether to center the columns of x.
scale: logical, whether to scale the columns of x.
normalize: whether (and how) to normalize before rotation; see Details.
order: logical, whether to order the sparse components by their explained variance; see Details.
flip: logical, whether to flip the signs of the columns of the estimated sparse components; see Details.
max.iter: the maximum number of iterations.
epsilon: the convergence tolerance.
quiet: logical, whether to suppress progress messages.
Details
rotate
: The rotate
option specifies the rotation technique to
use. Currently, there are two build-in options—“varimax” and “absmin”.
The “varimax” rotation maximizes the element-wise L4 norm of the rotated
matrix. It is faster and computationally more stable. The “absmin”
rotation minimizes the absolute sum of the rotated matrix. It is sharper
(as it directly minimizes the L1 norm) but slower and computationally
less stable.
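For intuition, the two criteria can be written as simple R functions (an illustrative sketch, not the package's internal code), where L stands for a rotated loadings matrix:
varimax.objective <- function(L) sum(L^4)     # maximized by "varimax" (equivalent to maximizing the element-wise L4 norm)
absmin.objective  <- function(L) sum(abs(L))  # minimized by "absmin" (the element-wise L1 norm)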
shrink: The shrink option specifies the shrinkage operator to use. Currently, there are two built-in options, "soft"- and "hard"-thresholding. The "soft"-thresholding uniformly shrinks all elements toward zero and sets the small elements exactly to zero. The "hard"-thresholding only sets the small elements to zero, leaving the rest unchanged.
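For intuition, the two operators can be sketched as follows (hypothetical helper names; the threshold t is whatever the algorithm uses internally):
soft.threshold <- function(x, t) sign(x) * pmax(abs(x) - t, 0)  # shrinks every element, zeroing the small ones
hard.threshold <- function(x, t) x * (abs(x) > t)               # zeroes the small elements, keeps the rest as-is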
normalize: The argument normalize indicates whether, and how, normalization should be done before rotation and then undone after rotation. If normalize is FALSE (the default), no normalization is done. If normalize is TRUE, Kaiser normalization is done, so that the squared row entries of the normalized x sum to 1.0 (this is sometimes called Horst normalization). For rotate="absmin", if normalize is a vector of length equal to the number of indicators (i.e., the number of rows of x), then the columns are divided by normalize before rotation and multiplied by normalize after rotation. Also, if normalize is a function, it should take x as an argument and return a vector, which is then used like the vector above.
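A minimal sketch of the normalize=TRUE behaviour described above (a hypothetical helper, not the package's code): divide each row by its Euclidean norm before rotation, keeping the norms so the scaling can be undone afterwards.
kaiser.normalize <- function(x) {
  s <- sqrt(rowSums(x^2))              # row-wise Euclidean norms
  list(normed = sweep(x, 1, s, "/"),   # squared entries of each row now sum to 1
       scale  = s)                     # multiply rows by this after rotation to undo
}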
order: In PCA (and SVD), the principal components (and the singular vectors) are ordered. To mirror this, we order the sparse components (i.e., the columns of z or y) by their explained variance in the data, defined for a column y of the sparse component as sum((x %*% y)^2). Note: this is not to be confused with the cumulative proportion of variance explained by y (and z), particularly since y (and z) may not be strictly orthogonal.
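A sketch of this ordering rule (illustrative only, not the package's code): compute the explained variance of each column of a sparse component y and reorder its columns accordingly.
expl.var  <- colSums((x %*% y)^2)  # sum((x %*% y_j)^2) for each column y_j of y
y.ordered <- y[, order(expl.var, decreasing = TRUE), drop = FALSE]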
flip: The argument flip indicates whether the columns of the estimated sparse components should be flipped. Note that the estimated (sparse) loadings, i.e., the weights on the original variables, are column-wise invariant to a sign flip: flipping a principal direction does not change the amount of variance explained by the component. If flip=TRUE, the columns of the loadings are flipped so that each column is positive-skewed, meaning that for each column the sum of cubed elements (i.e., sum(x^3)) is non-negative.
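A sketch of this flipping rule (illustrative only, not the package's code): flip the sign of any column whose sum of cubed elements is negative, so that every column ends up positive-skewed.
signs     <- ifelse(colSums(y^3) >= 0, 1, -1)  # +1 if the column is already positive-skewed
y.flipped <- sweep(y, 2, signs, "*")           # each column now has a non-negative sum of cubes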
Value
an sma object that contains:

z, b, t(y): the three parts in the SMA. The row names of y inherit the column names of x.
score: the total variance explained by the SMA. This is the optimal objective value obtained.
n.iter: the number of iterations used.
References
Chen, F. and Rohe, K. (2020) "A New Basis for Sparse Principal Component Analysis."
See Also
Examples
library(epca)  # provides sma(), and the polar() and shrinkage() helpers used below

## simulate a rank-5 data matrix with some additive Gaussian noise
n <- 300
p <- 50
k <- 5 ## rank
z <- shrinkage(polar(matrix(runif(n * k), n, k)), sqrt(n))
b <- diag(5) * 3
y <- shrinkage(polar(matrix(runif(p * k), p, k)), sqrt(p))
e <- matrix(rnorm(n * p, sd = .01), n, p)
x <- scale(z %*% b %*% t(y) + e)
## perform sparse matrix approximation
s.sma <- sma(x, k)
s.sma
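## A possible follow-up, assuming the fitted object stores its parts under the
## names z, b, and y together with score and n.iter (see the Value section above):
s.sma$score                                  # total variance explained by the SMA
s.sma$n.iter                                 # number of iterations used
x.hat <- s.sma$z %*% s.sma$b %*% t(s.sma$y)  # reconstruct the approximation of x
mean((x - x.hat)^2)                          # mean squared approximation error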