It is well known that kappa values depend on the prevalence of the measured attribute and can yield biased results when marginal distributions are skewed. Consequently, kappa values for different ICF categories cannot be compared with each other properly, because their baseline prevalences are unknown and their marginal distributions may be more or less skewed. In the present study, therefore, only the information about whether a kappa value exceeded chance was used for comparisons across ICF categories. The percentage of observed rater agreement was preferred as the indicator of the level of agreement. Emphasizing the actual observed agreement is further justified because the kappa statistic is a chance-corrected measure of agreement, yet the precise role of chance in the rating process remains unclear.
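As a brief illustration of this prevalence dependence (the cell counts below are hypothetical and serve only to sketch the effect, not data from the present study), kappa is defined as

\[
  \kappa \;=\; \frac{p_o - p_e}{1 - p_e},
\]

where \(p_o\) denotes the observed proportion of agreement and \(p_e\) the agreement expected by chance from the marginal distributions. For a 2 x 2 rating table with balanced marginals and cell counts 40, 10, 10, 40 (\(n = 100\)), one obtains \(p_o = 0.80\), \(p_e = 0.50\), and thus \(\kappa = 0.60\). With skewed marginals and cell counts 75, 10, 10, 5, the observed agreement is unchanged (\(p_o = 0.80\)), but \(p_e = 0.745\), so \(\kappa \approx 0.22\). Identical observed agreement can therefore correspond to very different kappa values, which is why the observed agreement itself was emphasized here.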