epiKappa {epibasix}    R Documentation

Computation of the Kappa Statistic for Agreement Between Two Raters

Description

Computes the kappa statistic for agreement between two raters, performs hypothesis tests and calculates confidence intervals.

Usage

epiKappa(C, alpha=0.05, k0=0.4, digits=3)

Arguments

C

An n x n classification matrix, or a matrix of proportions.

k0

The null hypothesis value of kappa; the test is of kappa = k0.

alpha

The desired Type I error rate for hypothesis tests and confidence intervals.

digits

The number of digits to round calculations to.

Details

The kappa statistic measures agreement between two raters. For simplicity, consider the case where each rater classifies an object as either Type I or Type II. The diagonal elements of the resulting 2x2 matrix are the agreeing classifications, that is, the objects that both raters classify as Type I or as Type II; the discordant observations lie on the off-diagonal. Note that the alternative hypothesis is always one-sided (greater than), as interest lies in whether kappa exceeds a given threshold, such as 0.4 for fair agreement.
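As an illustration of the underlying computation (a sketch from the standard definition, not epiKappa's internal code): kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement (the diagonal) and p_e is the agreement expected by chance from the marginal totals.

```r
# Cohen's kappa from first principles, using the matrix from the
# Examples section below. This is a sketch of the definition only;
# epiKappa() additionally returns standard errors, a test and a CI.
C <- cbind(c(28, 5), c(4, 61))       # 2x2 classification matrix
P <- C / sum(C)                      # convert counts to proportions
po <- sum(diag(P))                   # observed agreement (diagonal)
pe <- sum(rowSums(P) * colSums(P))   # agreement expected by chance
kappa <- (po - pe) / (1 - pe)
round(kappa, 3)                      # 0.793
```

A kappa of about 0.79 indicates substantial agreement, well above the k0 = 0.6 threshold tested in the example below.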

Value

kappa

The computed kappa statistic.

seh

The standard error of kappa, computed under H0.

seC

The standard error of kappa, as used for confidence intervals.

CIL

The lower confidence limit for kappa.

CIU

The upper confidence limit for kappa.

Z

The test statistic for the hypothesis test of kappa = k0 vs. kappa > k0.

p.value

The p-value for the hypothesis test.

Data

The original matrix of agreement, as supplied.

k0

The null hypothesis value, k0, as supplied.

alpha

The desired Type I error rate, as supplied.

digits

The number of digits to round calculations to, as supplied.

Author(s)

Michael Rotondi, mrotondi@yorku.ca

References

Szklo M and Nieto FJ. Epidemiology: Beyond the Basics, Jones and Bartlett: Boston, 2007.

Fleiss J. Statistical Methods for Rates and Proportions, 2nd ed. New York: John Wiley and Sons; 1981.

See Also

sensSpec

Examples

X <- cbind(c(28, 5), c(4, 61))
summary(epiKappa(X, alpha = 0.05, k0 = 0.6))

[Package epibasix version 1.5 Index]