test_icr {tidycomm}    R Documentation
Perform an intercoder reliability test
Description
Performs an intercoder reliability test by computing various intercoder reliability estimates for the included variables.
Usage
test_icr(
data,
unit_var,
coder_var,
...,
levels = NULL,
na.omit = FALSE,
agreement = TRUE,
holsti = TRUE,
kripp_alpha = TRUE,
cohens_kappa = FALSE,
fleiss_kappa = FALSE,
brennan_prediger = FALSE,
lotus = FALSE,
s_lotus = FALSE
)
Arguments
data
    a tibble or a tdcmm model

unit_var
    Variable with unit identifiers

coder_var
    Variable with coder identifiers

...
    Variables to compute intercoder reliability estimates for. Leave
    empty to compute for all variables (excluding unit_var and
    coder_var).

levels
    Optional named vector with levels of test variables

na.omit
    Logical indicating whether NA values should be stripped before
    computation. Defaults to FALSE.

agreement
    Logical indicating whether simple percent agreement should be
    computed. Defaults to TRUE.

holsti
    Logical indicating whether Holsti's reliability estimate (mean
    pairwise agreement) should be computed. Defaults to TRUE.

kripp_alpha
    Logical indicating whether Krippendorff's Alpha should be
    computed. Defaults to TRUE.

cohens_kappa
    Logical indicating whether Cohen's Kappa should be computed.
    Defaults to FALSE.

fleiss_kappa
    Logical indicating whether Fleiss' Kappa should be computed.
    Defaults to FALSE.

brennan_prediger
    Logical indicating whether Brennan & Prediger's Kappa should be
    computed (extension to 3+ coders as proposed by von Eye (2006)).
    Defaults to FALSE.

lotus
    Logical indicating whether Fretwurst's Lotus should be computed.
    Defaults to FALSE.

s_lotus
    Logical indicating whether Fretwurst's standardized Lotus
    (S-Lotus) should be computed. Defaults to FALSE.
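As a rough illustration of the two simplest estimates above: simple percent agreement is the share of units on which all coders assigned the same code, while Holsti's estimate averages percent agreement over all coder pairs. A minimal Python sketch (not tidycomm's implementation; function and variable names are illustrative):

```python
from itertools import combinations

def percent_agreement(codes_by_unit):
    """Share of units on which all coders assigned the same code."""
    return sum(len(set(unit)) == 1 for unit in codes_by_unit) / len(codes_by_unit)

def holsti(codes_by_unit):
    """Mean pairwise percent agreement across all coder pairs."""
    n_coders = len(codes_by_unit[0])
    scores = []
    for i, j in combinations(range(n_coders), 2):
        agree = sum(unit[i] == unit[j] for unit in codes_by_unit)
        scores.append(agree / len(codes_by_unit))
    return sum(scores) / len(scores)

# Three coders, four units: full agreement on units 1 and 3 only.
units = [(1, 1, 1), (0, 0, 1), (1, 1, 1), (0, 1, 1)]
print(percent_agreement(units))  # 0.5
print(round(holsti(units), 4))   # 0.6667
```

With three or more coders Holsti's estimate is typically higher than strict percent agreement, since it only requires each pair (not all coders at once) to match.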
Value
A tdcmm model.
References
Brennan, R. L., & Prediger, D. J. (1981). Coefficient Kappa: Some uses, misuses, and alternatives. Educational and Psychological Measurement, 41(3), 687-699. https://doi.org/10.1177/001316448104100307
Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1), 37-46. https://doi.org/10.1177/001316446002000104
Fleiss, J. L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5), 378-382. https://doi.org/10.1037/h0031619
Fretwurst, B. (2015). Reliabilität und Validität von Inhaltsanalysen. Mit Erläuterungen zur Berechnung des Reliabilitätskoeffizienten „Lotus" mit SPSS. In W. Wirth, K. Sommer, M. Wettstein, & J. Matthes (Eds.), Qualitätskriterien in der Inhaltsanalyse (pp. 176-203). Herbert von Halem.
Krippendorff, K. (2011). Computing Krippendorff's Alpha-Reliability. Retrieved from http://repository.upenn.edu/asc_papers/43
von Eye, A. (2006). An Alternative to Cohen's Kappa. European Psychologist, 11(1), 12-24. https://doi.org/10.1027/1016-9040.11.1.12
Examples
fbposts %>% test_icr(post_id, coder_id, pop_elite, pop_othering)
fbposts %>% test_icr(post_id, coder_id, levels = c(n_pictures = "ordinal"), fleiss_kappa = TRUE)
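For nominal data, Krippendorff's Alpha (computed by default, kripp_alpha = TRUE) is defined as 1 - D_o/D_e, the observed disagreement relative to the disagreement expected by chance, derived from a coincidence matrix of value pairs. A minimal Python sketch for complete data without missing codings (not tidycomm's implementation; names are illustrative):

```python
from collections import Counter

def kripp_alpha_nominal(codes_by_unit):
    """Krippendorff's Alpha for nominal data, no missing codings."""
    o = Counter()  # coincidence matrix: counts of ordered value pairs
    for unit in codes_by_unit:
        m = len(unit)
        for i in range(m):
            for j in range(m):
                if i != j:
                    o[(unit[i], unit[j])] += 1 / (m - 1)
    n_total = sum(o.values())
    marginals = Counter()
    for (c, _), v in o.items():
        marginals[c] += v
    d_obs = sum(v for (c, k), v in o.items() if c != k) / n_total
    d_exp = sum(marginals[c] * marginals[k]
                for c in marginals for k in marginals if c != k)
    d_exp /= n_total * (n_total - 1)
    return 1 - d_obs / d_exp

# Two coders, four units: coder A codes 1,1,0,0 and coder B codes 1,1,0,1.
print(kripp_alpha_nominal([(1, 1), (1, 1), (0, 0), (0, 1)]))  # ~0.5333
```

Unlike percent agreement, Alpha corrects for chance agreement and handles 2+ coders uniformly, which is why it is among tidycomm's default estimates.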