lda {rchemo} | R Documentation |
LDA and QDA
Description
Probabilistic (parametric) linear and quadratic discriminant analysis.
Usage
lda(X, y, prior = c("unif", "prop"))
qda(X, y, prior = c("unif", "prop"))
## S3 method for class 'Lda'
predict(object, X, ...)
## S3 method for class 'Qda'
predict(object, X, ...)
Arguments
X: For the main functions: training X-data (n, p).
y: Training class membership (n).
prior: The prior probabilities of the classes. Possible values are "unif" (default; probabilities are set equal for all the classes) or "prop" (probabilities are set equal to the observed proportions of the classes in y).
object: For the auxiliary functions: a fitted model, output of a call to the main functions.
...: For the auxiliary functions: optional arguments. Not used.
Details
For each observation to predict, the posterior probability of belonging to a given class is estimated with Bayes' formula, assuming the chosen priors (uniform or proportional) and a multivariate Normal distribution for the X-variables. The predicted class is the one with the highest posterior probability.
LDA assumes a common (homogeneous) X-covariance matrix across classes, while QDA estimates a separate covariance matrix for each class. Both functions use dmnorm to estimate the multivariate Normal densities.
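As an illustration of the computation just described, here is a minimal base-R sketch of the LDA posterior with uniform priors. This is not the package's internal code: dmnorm is replaced by a hand-rolled Normal density, and the pooled within-class covariance is computed directly.

```r
## Hand-rolled multivariate Normal density (illustration only;
## the rchemo functions use dmnorm internally)
dmvn <- function(x, mu, Sigma) {
  p <- length(mu)
  d <- x - mu
  as.numeric(exp(-0.5 * t(d) %*% solve(Sigma) %*% d) /
               sqrt((2 * pi)^p * det(Sigma)))
}

data(iris)
X <- as.matrix(iris[, 1:4])
y <- iris[, 5]
lev <- levels(y)

## Pooled (within-class) covariance matrix, as assumed by LDA
W <- Reduce(`+`, lapply(lev, function(l) {
  Z <- X[y == l, , drop = FALSE]
  (nrow(Z) - 1) * cov(Z)
})) / (nrow(X) - length(lev))

prior <- rep(1 / length(lev), length(lev))  # "unif" priors

## Bayes' formula for one observation: class densities times priors,
## normalized to sum to 1
xnew <- X[1, ]
ds <- sapply(lev, function(l)
  dmvn(xnew, colMeans(X[y == l, , drop = FALSE]), W))
posterior <- prior * ds / sum(prior * ds)
pred <- lev[which.max(posterior)]
```

The prediction is the class maximizing the posterior; with "prop" priors, prior would instead hold the observed class proportions.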
Value
For lda and qda:
ct: centers (column-wise means) for the classes of observations.
W: unbiased within-class covariance matrices for the classes of observations.
wprior: prior probabilities of the classes.
lev: y levels.
ni: number of observations by level of y.
For predict.Lda and predict.Qda:
pred: predicted classes of observations.
ds: estimated multivariate Normal densities.
posterior: posterior probabilities of the classes.
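A quick way to inspect the components listed above (assuming the rchemo package is loaded):

```r
data(iris)
fm <- lda(iris[, 1:4], iris[, 5], prior = "unif")
names(fm)        # should include ct, W, wprior, lev, ni
res <- predict(fm, iris[, 1:4])
names(res)       # should include pred, ds, posterior
```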
References
Saporta, G., 2011. Probabilités, analyse des données et statistique. Editions Technip, Paris, France.
Venables, W. N. and Ripley, B. D. (2002) Modern Applied Statistics with S. Fourth edition. Springer.
Examples
## EXAMPLE 1
data(iris)
X <- iris[, 1:4]
y <- iris[, 5]
N <- nrow(X)
nTest <- round(.25 * N)
nTraining <- N - nTest
s <- sample(1:N, nTest)
Xtrain <- X[-s, ]
ytrain <- y[-s]
Xtest <- X[s, ]
ytest <- y[s]
prior <- "unif"
fm <- lda(Xtrain, ytrain, prior = prior)
res <- predict(fm, Xtest)
names(res)
headm(res$pred)
headm(res$ds)
headm(res$posterior)
err(res$pred, ytest)
## EXAMPLE 2
data(iris)
X <- iris[, 1:4]
y <- iris[, 5]
N <- nrow(X)
nTest <- round(.25 * N)
nTraining <- N - nTest
s <- sample(1:N, nTest)
Xtrain <- X[-s, ]
ytrain <- y[-s]
Xtest <- X[s, ]
ytest <- y[s]
prior <- "prop"
fm <- lda(Xtrain, ytrain, prior = prior)
res <- predict(fm, Xtest)
names(res)
headm(res$pred)
headm(res$ds)
headm(res$posterior)
err(res$pred, ytest)
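qda shares the lda interface, so the same workflow applies. A third example, sketched on the pattern of the examples above:

```r
## EXAMPLE 3 (QDA, proportional priors)
data(iris)
X <- iris[, 1:4]
y <- iris[, 5]
N <- nrow(X)
nTest <- round(.25 * N)
s <- sample(1:N, nTest)
Xtrain <- X[-s, ]
ytrain <- y[-s]
Xtest <- X[s, ]
ytest <- y[s]
fm <- qda(Xtrain, ytrain, prior = "prop")
res <- predict(fm, Xtest)
headm(res$posterior)
err(res$pred, ytest)
```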