kappam.fleiss {irr}    R Documentation
Fleiss' Kappa for m raters
Description
Computes Fleiss' Kappa as an index of interrater agreement between m raters on categorical data. Additionally, category-wise Kappas can be computed.
Usage
kappam.fleiss(ratings, exact = FALSE, detail = FALSE)
Arguments
ratings
    an n*m matrix or data frame with n subjects and m raters.
exact
    a logical indicating whether the exact Kappa (Conger, 1980) or the Kappa described by Fleiss (1971) should be computed.
detail
    a logical indicating whether category-wise Kappas should be computed.
Details
Missing data are omitted listwise.
The coefficient described by Fleiss (1971) does not reduce to Cohen's Kappa (unweighted) for m=2 raters. Conger (1980) therefore proposed the exact Kappa coefficient, which is slightly higher in most cases.
The null hypothesis Kappa=0 can only be tested using Fleiss' formulation of Kappa.
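For illustration, the following minimal sketch computes Fleiss' (1971) Kappa directly from its definition; it assumes a complete (no missing values) matrix of ratings and is not the package's implementation:

fleiss.sketch <- function(ratings) {
  ratings <- as.matrix(ratings)
  n <- nrow(ratings)                       # subjects
  m <- ncol(ratings)                       # raters
  lev <- sort(unique(as.vector(ratings)))  # categories
  # n x k table: how many of the m raters assigned each category per subject
  counts <- t(apply(ratings, 1, function(r) table(factor(r, levels = lev))))
  p.j <- colSums(counts) / (n * m)         # overall category proportions
  P.i <- (rowSums(counts^2) - m) / (m * (m - 1))  # per-subject agreement
  P.bar <- mean(P.i)                       # mean observed agreement
  P.e <- sum(p.j^2)                        # expected chance agreement
  (P.bar - P.e) / (1 - P.e)                # Fleiss' Kappa
}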
Value
A list with class "irrlist" containing the following components:
$method
    a character string describing the method applied for the computation of interrater reliability.
$subjects
    the number of subjects examined.
$raters
    the number of raters.
$irr.name
    a character string specifying the name of the coefficient.
$value
    the value of Kappa.
$stat.name
    a character string specifying the name of the corresponding test statistic.
$statistic
    the value of the test statistic.
$p.value
    the p-value for the test.
$detail
    a table with category-wise Kappas and the corresponding test statistics.
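The components of the returned list can be accessed by name; for example (a hypothetical session using the diagnoses dataset shipped with irr):

data(diagnoses)
res <- kappam.fleiss(diagnoses)
res$value      # Kappa estimate
res$statistic  # test statistic (a z value for Fleiss' Kappa)
res$p.value    # p-value for the test of Kappa = 0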
Author(s)
Matthias Gamer
References
Conger, A.J. (1980). Integration and generalization of kappas for multiple raters. Psychological Bulletin, 88, 322-328.
Fleiss, J.L. (1971). Measuring nominal scale agreement among many raters. Psychological Bulletin, 76, 378-382.
Fleiss, J.L., Levin, B., & Paik, M.C. (2003). Statistical Methods for Rates and Proportions, 3rd Edition. New York: John Wiley & Sons.
Examples
data(diagnoses)
kappam.fleiss(diagnoses) # Fleiss' Kappa
kappam.fleiss(diagnoses, exact=TRUE) # Exact Kappa
kappam.fleiss(diagnoses, detail=TRUE) # Fleiss' and category-wise Kappa
kappam.fleiss(diagnoses[,1:4]) # Fleiss' Kappa of raters 1 to 4
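# The listwise handling of missing ratings mentioned in Details can be
# illustrated by introducing an NA (a hypothetical modification of the
# example data):
dg <- diagnoses
dg[1, 1] <- NA             # first subject now has a missing rating
kappam.fleiss(dg)$subjects # one subject fewer than nrow(diagnoses)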