do.bpca {Rdimtools}    R Documentation
Bayesian Principal Component Analysis
Description
Bayesian PCA (BPCA) is a variant of PCA that imposes a prior on the projection bases and
thereby encodes a basis-selection mechanism. Although the model is fully Bayesian,
do.bpca follows the original paper by Bishop in that it returns only the posterior mode
as an estimate, using an ARD-motivated prior and treating the noise variance as a
quantity to be estimated. Unlike PPCA, it retains the full set of bases and returns a
relative weight for each base: the smaller the \alpha value, the more likely the
corresponding column vector of mp.W is to be selected as a potential basis.
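A minimal sketch (not part of the original manual) of how these ARD-style weights might be inspected after a fit; it assumes the mp.alpha and mp.W components documented under Value below.

## sketch: inspect relative weights after fitting BPCA
library(Rdimtools)
X   <- as.matrix(iris[, 1:4])
fit <- do.bpca(X, ndim = 3)
fit$mp.alpha            # one relative weight per base in fit$mp.W
order(fit$mp.alpha)     # smaller alpha = base more likely to be selected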
Usage
do.bpca(X, ndim = 2, ...)
Arguments
X: an (n\times p) matrix or data frame whose rows are observations and columns represent independent variables.
ndim: an integer-valued target dimension.
...: extra parameters for the algorithm.
Value
a named Rdimtools S3 object containing

- Y: an (n\times ndim) matrix whose rows are embedded observations.
- projection: a (p\times ndim) matrix whose columns are basis for projection.
- mp.itercount: the number of iterations taken for the EM algorithm to converge.
- mp.sigma2: estimated \sigma^2 value via the EM algorithm.
- mp.alpha: a length-(ndim-1) vector of relative weights for each base in mp.W.
- mp.W: an (ndim\times (ndim-1)) matrix from the EM update.
- algorithm: name of the algorithm.
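As a short illustrative sketch (not from the original manual), the components above can be accessed by name, assuming the usual list-like access for Rdimtools S3 objects:

## sketch: accessing the returned components by name
res <- do.bpca(as.matrix(iris[, 1:4]), ndim = 2)
dim(res$Y)           # (n x ndim) embedded observations
dim(res$projection)  # (p x ndim) projection basis
res$mp.itercount     # EM iterations until convergence
res$mp.sigma2        # estimated noise variance sigma^2
res$algorithm        # name of the algorithm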
Author(s)
Kisung You
References
Bishop C (1999). “Bayesian PCA.” In Advances in Neural Information Processing Systems, volume 11, 382–388.
See Also
Examples
## Not run:
## use iris dataset
data(iris)
set.seed(100)
subid = sample(1:150,50)
X = as.matrix(iris[subid,1:4])
lab = as.factor(iris[subid,5])
## compare BPCA with others
out1 <- do.bpca(X, ndim=2)
out2 <- do.pca(X, ndim=2)
out3 <- do.lda(X, lab, ndim=2)
## visualize
opar <- par(no.readonly=TRUE)
par(mfrow=c(1,3))
plot(out1$Y, col=lab, pch=19, cex=0.8, main="Bayesian PCA")
plot(out2$Y, col=lab, pch=19, cex=0.8, main="PCA")
plot(out3$Y, col=lab, pch=19, cex=0.8, main="LDA")
par(opar)
## End(Not run)