measr_extract {measr}    R Documentation

Extract components of a measrfit object.

Description

Extract components of an estimated diagnostic classification model from a measrfit object.

Usage

measr_extract(model, ...)

## S3 method for class 'measrdcm'
measr_extract(model, what, ...)

Arguments

model

The estimated model to extract information from.

...

Additional arguments passed to each extract method.

  • ppmc_interval:

    For what = "odds_ratio_flags" and what = "conditional_prob_flags", the compatibility interval used for determining model fit flags to return. For example, a ppmc_interval of 0.95 (the default) will return any PPMCs where the posterior predictive p-value (ppp) is less than 0.025 or greater than 0.975.

  • agreement:

    For what = "classification_reliability", additional measures of agreement to include. By default, the classification accuracy and consistency metrics defined by Johnson & Sinharay (2018) are returned. Additional metrics that can be supplied to agreement are Goodman & Kruskal's lambda (lambda), Cohen's kappa (kappa), Youden's statistic (youden), the tetrachoric correlation (tetra), the true positive rate (tp), and the true negative rate (tn).

    For what = "probability_reliability", additional measures of agreement to include. By default, the informational reliability index defined by Johnson & Sinharay (2020) is returned. Additional metrics that can be supplied to agreement are the point biserial reliability index (bs), the parallel forms reliability index (pf), and the tetrachoric reliability index (tb), originally defined by Templin & Bradshaw (2013).
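A sketch of how these arguments might be passed, assuming fit is an estimated measrdcm object for which the relevant PPMCs and reliability estimates have already been computed (the object name fit is hypothetical):

```r
# `fit` is a hypothetical estimated measrdcm object
# Widen the flagging interval: with ppmc_interval = 0.8, PPMCs with a
# posterior predictive p-value below 0.10 or above 0.90 are flagged
measr_extract(fit, "odds_ratio_flags", ppmc_interval = 0.8)

# Request Cohen's kappa and Youden's statistic in addition to the default
# Johnson & Sinharay (2018) accuracy and consistency metrics
measr_extract(fit, "classification_reliability",
              agreement = c("kappa", "youden"))
```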

what

Character string. The information to be extracted. See details for available options.

Details

For diagnostic classification models, the components that can be extracted include structural parameters (strc_param), model fit flags (odds_ratio_flags, conditional_prob_flags), and reliability measures (classification_reliability, probability_reliability). See the what argument above.

Value

The extracted information. The specific structure will vary depending on what is being extracted, but usually the returned object is a tibble with the requested information.
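Because the result is usually a tibble, it can feed directly into standard tidyverse workflows; a minimal sketch, assuming model is an estimated measrdcm object and dplyr is installed:

```r
library(dplyr)

# `model` is a hypothetical estimated measrdcm object.
# Column names vary by component, so inspect the tibble before filtering
params <- measr_extract(model, "strc_param")
glimpse(params)
```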

Methods (by class)

  • measr_extract(measrdcm): Extract components of an estimated diagnostic classification model.

References

Cui, Y., Gierl, M. J., & Chang, H.-H. (2012). Estimating classification consistency and accuracy for cognitive diagnostic assessment. Journal of Educational Measurement, 49(1), 19-38. doi:10.1111/j.1745-3984.2011.00158.x

Johnson, M. S., & Sinharay, S. (2018). Measures of agreement to assess attribute-level classification accuracy and consistency for cognitive diagnostic assessments. Journal of Educational Measurement, 55(4), 635-664. doi:10.1111/jedm.12196

Johnson, M. S., & Sinharay, S. (2020). The reliability of the posterior probability of skill attainment in diagnostic classification models. Journal of Educational and Behavioral Statistics, 45(1), 5-31. doi:10.3102/1076998619864550

Templin, J., & Bradshaw, L. (2013). Measuring the reliability of diagnostic classification model examinee estimates. Journal of Classification, 30(2), 251-275. doi:10.1007/s00357-013-9129-4

Examples


# Estimate an LCDM for the MDM data (rstan backend, optimizer-based fit)
rstn_mdm_lcdm <- measr_dcm(
  data = mdm_data, missing = NA, qmatrix = mdm_qmatrix,
  resp_id = "respondent", item_id = "item", type = "lcdm",
  method = "optim", seed = 63277, backend = "rstan"
)

# Extract the estimated structural parameters
measr_extract(rstn_mdm_lcdm, "strc_param")
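Other components named under the agreement argument can be extracted the same way; a sketch, assuming the corresponding reliability estimates have already been added to the fitted model:

```r
# Informational reliability of the posterior probabilities
# (Johnson & Sinharay, 2020); assumes reliability estimates
# have already been computed for this model
measr_extract(rstn_mdm_lcdm, "probability_reliability")
```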


[Package measr version 1.0.0]