Study on appropriateness of interrater chance-corrected agreement coefficients
Date Issued
2012
Author(s)
Kuo, Yan-Ling
Abstract
In behavioural research applications, one often needs to quantify the homogeneity of agreement between responses given by two (or more) raters or between two (or more) measurement devices. A given object can receive different ratings from different raters, so the reliability among raters becomes an important issue. In particular, investigators would like to know whether all raters classify objects in a consistent manner. Cohen (1960) proposed the kappa coefficient, κ, to correct for chance agreement between two raters. κ is widely used in the literature for quantifying agreement among raters on a nominal scale. However, Cohen's kappa coefficient has been criticized because it depends on the illness prevalence (base rate) in the particular population under study and is not directly related to the raters' rating abilities for the latent classes. Gwet (2008) proposed an alternative chance-corrected interrater agreement coefficient, the AC1 statistic, γ1.
De Mast (2007) suggested an appropriate chance-corrected interrater agreement coefficient, κ*, obtained by correcting for the agreement due to chance. In this thesis, we use asymptotic analysis to evaluate whether κ or γ1 is a consistent estimate of κ* when both raters follow the random rating model or Gwet's (2008) model, and we compare the performances of κ and γ1 against κ*.
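For reference, the following is a minimal sketch, not drawn from the thesis itself, of the standard two-rater formulas for Cohen's κ and Gwet's AC1, assuming ratings are supplied as two equal-length lists of nominal category labels (the example data and function names are illustrative only):

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa for two raters on a nominal scale."""
    n = len(r1)
    cats = sorted(set(r1) | set(r2))
    p_o = sum(a == b for a, b in zip(r1, r2)) / n           # observed agreement
    m1, m2 = Counter(r1), Counter(r2)                        # marginal counts per rater
    p_e = sum((m1[c] / n) * (m2[c] / n) for c in cats)       # chance agreement (product of marginals)
    return (p_o - p_e) / (1 - p_e)

def gwet_ac1(r1, r2):
    """Gwet's AC1 statistic for two raters on a nominal scale."""
    n = len(r1)
    cats = sorted(set(r1) | set(r2))
    K = len(cats)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n
    m1, m2 = Counter(r1), Counter(r2)
    pi = {c: (m1[c] / n + m2[c] / n) / 2 for c in cats}      # average marginal proportion
    p_e = sum(pi[c] * (1 - pi[c]) for c in cats) / (K - 1)   # AC1 chance-agreement term
    return (p_o - p_e) / (1 - p_e)

# Illustrative example: two raters classify 10 objects into categories "A" and "B".
rater1 = ["A", "A", "B", "A", "B", "A", "A", "B", "A", "A"]
rater2 = ["A", "B", "B", "A", "B", "A", "A", "A", "A", "A"]
print(cohen_kappa(rater1, rater2), gwet_ac1(rater1, rater2))
```

Both statistics share the form (p_o - p_e) / (1 - p_e); they differ only in how the chance-agreement term p_e is defined, which is the distinction the thesis examines against De Mast's κ*.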
Subjects
interrater agreement coefficient
kappa coefficient
AC1 statistic
chance-corrected
latent classes
raters
random rating
Type
thesis
File(s)
Name
index.html
Size
23.49 KB
Format
HTML
Checksum
(MD5):7fda778c96b215f296e3091fb54a5124
