ai {AgreementInterval}    R Documentation
ai
Description
Calculate the agreement interval of two measurement methods and quantify the agreement between them.
Usage
ai(x, y, lambda = 1, alpha = 0.05, clin.limit = NA)
Arguments
x: A continuous numeric vector of measurements from method 1.
y: A continuous numeric vector of measurements from method 2, the same length as x.
lambda: Reliability ratio of x vs. y; default is 1.
alpha: Discordance rate used to estimate the confidence interval; default is 0.05.
clin.limit: Clinically meaningful limit, given as a vector of lower and upper limits (optional).
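For illustration, a call that sets every argument explicitly might look like the sketch below; the data values and the clinical limits are chosen arbitrarily, and only x and y are required:

ai(x = c(1, 2, 3, 4, 7),    # measurements from method 1
   y = c(1, 3, 2, 5, 3),    # measurements from method 2, same length as x
   lambda = 1,              # assume equal reliability of the two methods
   alpha = 0.05,            # 5% discordance rate, i.e. a 95% confidence level
   clin.limit = c(-5, 5))   # optional clinically meaningful lower and upper limits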
Details
This function calculates the agreement interval (confidence interval) of two continuous numeric vectors obtained from two measurement methods on the same samples. Note that the function only handles the scenario with two evaluators, for example, comparing the concordance between two raters; support for more than two evaluators is under development.
The two numeric vectors are x and y. The function also reports commonly used measures based on index approaches, for example Pearson's correlation coefficient, the intraclass correlation coefficient (ICC), the concordance correlation coefficient (Lin's CCC), and the improved CCC (Liao's ICCC).
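As a minimal sketch of how the index-based measures can be sanity-checked (the exact layout of the indexEst component is an assumption here), the reported Pearson coefficient should agree with a direct computation:

x <- c(1, 2, 3, 4, 7)
y <- c(1, 3, 2, 5, 3)
fit <- ai(x = x, y = y)
fit$indexEst   # index-based agreement estimates (Pearson, ICC, CCC, ICCC)
cor(x, y)      # the Pearson entry of indexEst should match this direct computation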
Value
Function ai returns an object of class "ai", which is a list containing the following components:
alpha: Alpha input for confidence interval estimates
n: Sample size
conf.level: Confidence level calculated from alpha
lambda: Reliability ratio input of x vs y
summaryStat: Summary statistics of input data
sigma.e: Random error estimates
indexEst: Agreement estimates (with confidence intervals) based on index approaches
intervalEst: Agreement estimates (with confidence intervals) based on interval approaches
biasEst: Bias estimate
intercept: Intercept of the linear regression line from the measurement error model
slope: Slope of the linear regression line from the measurement error model
x.name: x variable name extracted from input, used for plotting
y.name: y variable name extracted from input, used for plotting
tolProb.cl: Tolerance probability calculated based on the optional clinically meaningful limit
k.cl: Number of discordant pairs based on the optional clinically meaningful limit
alpha.cl: Discordance rate based on the optional clinically meaningful limit
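A minimal sketch of inspecting the returned object; the component names follow the list above, but the internal layout of each component (vector, matrix, or data frame) is not specified here:

res <- ai(x = IPIA$Tomography, y = IPIA$Urography, clin.limit = c(-15, 15))
str(res, max.level = 1)   # overview of the components listed above
res$intervalEst           # interval-based agreement estimates
res$tolProb.cl            # tolerance probability at the clinical limit
res$alpha.cl              # discordance rate at the clinical limit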
Author(s)
Jialin Xu, Jason Liao
References
Luiz RR, Costa AJL, Kale PL, Werneck GL. Assessment of agreement of a quantitative variable: a new graphical approach. J Clin Epidemiol 2003; 56:963-7.
Liao JJZ. Quantifying an agreement study. Int J Biostat 2015; 11(1):125-133.
Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychol Bull 1979; 86:420-428.
Lin L-K. A concordance correlation coefficient to evaluate reproducibility. Biometrics 1989; 45:255-68.
Liao JJ. An improved concordance correlation coefficient. Pharm Stat 2003; 2:253-61.
Blackman NJ-M. Reproducibility of clinical data I: continuous outcomes. Pharm Stat 2004; 3:99-108.
Examples
# Basic call on two short numeric vectors
ai(x=1:4, y=c(1, 1, 2, 4))

# Two measurement vectors from two methods on the same samples
a <- c(1, 2, 3, 4, 7)
b <- c(1, 3, 2, 5, 3)
ai(x=a, y=b)

# IPIA example data included with the package (tomography vs. urography measurements)
ai(x=IPIA$Tomography, y=IPIA$Urography)
# Same comparison with clinically meaningful limits of -15 and 15
ai(x=IPIA$Tomography, y=IPIA$Urography, clin.limit=c(-15, 15))