get.R2 {Qval}    R Documentation
Calculate McFadden pseudo-R^{2}
Description
The function calculates the McFadden pseudo-R^{2} (R^{2}) for all items, either directly from the responses and Q-matrix or after fitting a CDM.
Usage
get.R2(Y = NULL, Q = NULL, CDM.obj = NULL, model = "GDINA")
Arguments
Y        A required N × I matrix or data.frame of binary item responses, where N is the number of examinees and I is the number of items.
Q        A required binary I × K Q-matrix specifying which of the K attributes are measured by each item.
CDM.obj  An object of class CDM.obj, as returned by the CDM function; when it is provided, Y and Q are not required.
model    Type of model to fit when CDM.obj is not provided; the default is "GDINA".
Details
The McFadden pseudo-R^{2} (McFadden, 1974) serves as a definitive model-fit index, quantifying the proportion of variance explained by the observed responses. Comparable to the squared multiple-correlation coefficient in linear statistical models, this coefficient of determination is applied in logistic regression models. Specifically, in the context of the CDM, where probabilities of accurate item responses are predicted for each examinee, the McFadden pseudo-R^{2} provides a metric to assess the alignment between these predictions and the actual responses observed. Its computation is straightforward, following the formula:
R_{i}^{2} = 1 - \frac{\log(L_{im})}{\log(L_{i0})}
where \log(L_{im}) is the log-likelihood of the model, and \log(L_{i0}) is the log-likelihood of the null model. If there are N examinees taking a test comprising I items, then \log(L_{im}) is computed as:
\log(L_{im}) = \sum_{p=1}^{N} \log \sum_{l=1}^{2^{K^\ast}} \pi(\alpha_{l}^{\ast} | \mathbf{X}_{p}) \, P_{i}(\alpha_{l}^{\ast})^{X_{pi}} \left(1-P_{i}(\alpha_{l}^{\ast})\right)^{1-X_{pi}}
where \pi(\alpha_{l}^{\ast} | \mathbf{X}_{p}) is the posterior probability of examinee p having attribute profile \alpha_{l}^{\ast} given their response vector \mathbf{X}_{p}, and X_{pi} is examinee p's response to item i. Let X_{i}^{mean} be the average probability of correctly responding to item i across all N examinees; then \log(L_{i0}) can be computed as:
\log(L_{i0}) = \sum_{p=1}^{N} \log\left[ \left(X_{i}^{mean}\right)^{X_{pi}} \left(1-X_{i}^{mean}\right)^{1-X_{pi}} \right]
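For concreteness, the following is a minimal R sketch of the item-level computation defined by the two formulas above; it is not the implementation used by get.R2, and the inputs X (an N × I binary response matrix), post (an N × L matrix of the posterior probabilities \pi(\alpha_{l}^{\ast} | \mathbf{X}_{p})), and P.i (a length-L vector of P_{i}(\alpha_{l}^{\ast})) are hypothetical placeholders.

mcfadden.R2.item <- function(X, post, P.i, i) {
  x <- X[, i]
  ## log(L_im): marginal log-likelihood of item i under the fitted model
  item.lik <- outer(x, P.i, function(xp, pl) pl^xp * (1 - pl)^(1 - xp))  # N x L
  logL.m <- sum(log(rowSums(post * item.lik)))
  ## log(L_i0): null-model log-likelihood based on the mean correct-response rate
  x.bar <- mean(x)
  logL.0 <- sum(x * log(x.bar) + (1 - x) * log(1 - x.bar))
  ## McFadden pseudo-R^2 for item i
  1 - logL.m / logL.0
}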
Value
An object of class matrix, consisting of the R^{2} for each item and each possible attribute mastery pattern.
Author(s)
Haijiang Qin <Haijiang133@outlook.com>
References
McFadden, D. (1974). Conditional logit analysis of qualitative choice behavior. In P. Zarembka (Ed.), Frontiers in econometrics (pp. 105–142). Academic Press.
Nájera, P., Sorrel, M. A., de la Torre, J., & Abad, F. J. (2021). Balancing fit and parsimony to improve Q-matrix validation. British Journal of Mathematical and Statistical Psychology, 74, 110–130. DOI: 10.1111/bmsp.12228.
Qin, H., & Guo, L. (2023). Using machine learning to improve Q-matrix validation. Behavior Research Methods. DOI: 10.3758/s13428-023-02126-0.
See Also
Examples
library(Qval)
set.seed(123)
## generate Q-matrix and data
K <- 3
I <- 20
example.Q <- sim.Q(K, I)
IQ <- list(
P0 = runif(I, 0.0, 0.2),
P1 = runif(I, 0.8, 1.0)
)
example.data <- sim.data(Q = example.Q, N = 500, IQ = IQ, model = "GDINA", distribute = "horder")
## calculate R^2 directly
R2 <- get.R2(Y = example.data$dat, Q = example.Q)
print(R2)

## calculate R^2 after fitting CDM
example.CDM.obj <- CDM(example.data$dat, example.Q, model = "GDINA")
R2 <- get.R2(CDM.obj = example.CDM.obj)
print(R2)
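## A hedged follow-up illustration: assuming the rows of the returned matrix
## index items and the columns index the candidate attribute patterns, the
## pattern with the largest R^2 for each item can be located as follows.
best.pattern <- apply(R2, 1, which.max)
print(best.pattern)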