CompareTests-package {CompareTests}    R Documentation

Correct for Verification Bias in Diagnostic Accuracy & Agreement


Description

A standard test is observed on all specimens. We treat the second test (or sampled test) as being conducted on only a stratified sample of specimens. Verification bias arises in this situation when the choice of specimens on which the second (sampled) test is conducted is not under investigator control. We treat the total sample as a stratified two-phase sample and use inverse probability weighting. We estimate diagnostic accuracy (category-specific classification probabilities, which for binary tests reduce to sensitivity and specificity) and agreement statistics (percent agreement, percent agreement by category, unweighted Kappa, quadratic-weighted Kappa, and a symmetry test that reduces to McNemar's test for binary tests).


Details

Package: CompareTests
Type: Package
Version: 1.1
Date: 2015-06-19
License: GPL-3
LazyLoad: yes

You have a dataframe with columns "stdtest" (no NAs allowed; all specimens with NA stdtest results are dropped), "sampledtest" (treated here as the gold standard; NA for the specimens on which it was not conducted), and sampling strata "strata1" and "strata2" (values cannot be missing for any specimen). Correct for verification bias in the diagnostic and agreement statistics with CompareTests(stdtest, sampledtest, interaction(strata1, strata2), goldstd = "sampledtest").
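To illustrate the inverse-probability-weighting idea behind the package, here is a hand-rolled sketch (NOT the CompareTests() implementation; the toy data, single stratum variable, and percent-agreement formula are assumptions for exposition). Each verified specimen is weighted by the inverse of its stratum's sampling fraction:

```r
## Toy sketch of inverse probability weighting for verification bias.
## Hypothetical data -- not the package's internal code.
d <- data.frame(
  stdtest     = c(1, 1, 0, 0, 1, 0),
  sampledtest = c(1, 0, 0, NA, 1, 1),   # NA = specimen not verified
  stratum     = c("a", "a", "a", "a", "b", "b")
)

## Weight = stratum size / number verified in stratum
## (the inverse of the stratum's sampling fraction)
n_total    <- table(d$stratum)
n_verified <- table(d$stratum[!is.na(d$sampledtest)])
d$w        <- as.numeric(n_total[d$stratum] / n_verified[d$stratum])

## IPW estimate of percent agreement among verified specimens
ver <- d[!is.na(d$sampledtest), ]
pct_agree <- sum(ver$w * (ver$stdtest == ver$sampledtest)) / sum(ver$w)
pct_agree   # 11/18 for this toy data
```

CompareTests() performs this stratified weighting internally and additionally provides variance estimates for the accuracy and agreement statistics.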


Author(s)

Hormuzd A. Katki and David W. Edelstein

Maintainer: Hormuzd Katki <>


References

Katki HA, Li Y, Edelstein DW, Castle PE. Estimating the agreement and diagnostic accuracy of two diagnostic tests when one test is conducted on only a subsample of specimens. Stat Med. 2012 Feb 28; 31(5), doi:10.1002/sim.4422.


Examples

# (Column names follow the Details section; the single "stratum" column
#  in the example dataset is assumed.)

# Get specimens dataset
data(specimens)

# Get diagnostic and agreement statistics if sampledtest is the gold standard
CompareTests(specimens$stdtest, specimens$sampledtest, specimens$stratum,
             goldstd = "sampledtest")

# Get diagnostic and agreement statistics if stdtest is the gold standard
CompareTests(specimens$stdtest, specimens$sampledtest, specimens$stratum,
             goldstd = "stdtest")

# Get agreement statistics if neither test is a gold standard
CompareTests(specimens$stdtest, specimens$sampledtest, specimens$stratum,
             goldstd = FALSE)

[Package CompareTests version 1.2 Index]