jmgirard/mReliability

Can you explain how the specific agreement coefficient is derived?


[image: the specific agreement equation]
I do not understand how you derive the denominator of this equation for specific agreement.

[image: the observed agreement equation]
In another post, you show this equation for observed agreement, which is similar to the calculation of the mean proportion p̄ in Fleiss' kappa. For observed agreement, I understand that you are counting the rater-rater pairs that agree and comparing them to all possible rater-rater pairs.
https://en.wikipedia.org/wiki/Fleiss%27_kappa
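To write out what I mean (my own notation, reconstructed from that post, so it may not match yours exactly): with $r_{ik}$ as the number of raters who assigned object $i$ to category $k$, and $r_i$ as the number of raters who rated object $i$,

$$P_O = \frac{\sum_{i=1}^{n}\sum_{k=1}^{q} r_{ik}(r_{ik}-1)}{\sum_{i=1}^{n} r_i(r_i-1)}$$

i.e., agreeing ordered rater-pairs over all possible ordered rater-pairs, summed across the $n$ objects and $q$ categories.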

I just don't know where the denominator of the specific agreement equation comes from, and I cannot find anywhere that this specific agreement formula for more than two categories is explained.

Any help in guiding me to a reference that explains the derivation would be very much appreciated. Also, perhaps this could be added to the page describing the specific agreement coefficient.

I would point you to Appendix 1 of: Uebersax, J. S. (1982). A design-independent method for measuring the reliability of psychiatric diagnosis. Journal of Psychiatric Research, 17(4), 335–342.

The numerator is the number of rater-pairs that agreed conditional on one rater assigning the object to category k. The denominator is the number of rater-pairs that could have agreed conditional on one rater assigning the object to category k.
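In symbols, a sketch of that idea (using $r_{ik}$ for the number of raters who assigned object $i$ to category $k$ and $r_i$ for the number of raters who rated object $i$; check the page for the exact notation):

$$SA_k = \frac{\sum_{i=1}^{n} r_{ik}(r_{ik}-1)}{\sum_{i=1}^{n} r_{ik}(r_i-1)}$$

Each object contributes $r_{ik}(r_{ik}-1)$ ordered rater-pairs that agreed on category $k$ (numerator) and $r_{ik}(r_i-1)$ ordered rater-pairs in which one rater chose category $k$ and so could have been matched (denominator). For example, with three raters on a single object, two of whom choose category $k$: the numerator is $2 \times 1 = 2$, the denominator is $2 \times 2 = 4$, and $SA_k = 2/4 = 0.5$.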

Thank you! Your explanation and the article were very helpful.