This study used meta-analytic methods to compare the interrater and intrarater reliabilities
of ratings of 10 dimensions of job performance used in the literature; ratings of
overall job performance were also examined. There was mixed support for the notion
that some dimensions are rated more reliably than others. Supervisory ratings appear to
have higher interrater reliability than peer ratings. Consistent with H. R. Rothstein
(1990), mean interrater reliability of supervisory ratings of overall job performance was
found to be .52. In all cases, interrater reliability is lower than intrarater reliability; because the correction for attenuation divides an observed correlation by the square root of the reliability estimate, substituting the higher intrarater coefficient undercorrects for measurement error and thereby biases research results. These findings have important
implications for both research and practice.
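
As a minimal sketch of why the choice of reliability estimate matters, the snippet below applies the classical correction for attenuation, dividing an observed validity coefficient by the square root of the criterion reliability. The .52 interrater figure is the value reported above; the observed validity of .25 and the intrarater value of .80 are hypothetical illustration values, not results from this study.

```python
import math

def correct_for_attenuation(observed_r: float, criterion_reliability: float) -> float:
    """Disattenuate an observed correlation for criterion unreliability
    using the classical formula r / sqrt(r_yy)."""
    return observed_r / math.sqrt(criterion_reliability)

observed_validity = 0.25  # hypothetical observed predictor-criterion correlation
interrater = 0.52         # mean interrater reliability reported in this study
intrarater = 0.80         # hypothetical, higher intrarater estimate

print(correct_for_attenuation(observed_validity, interrater))  # ~0.347
print(correct_for_attenuation(observed_validity, intrarater))  # ~0.280
```

Because the intrarater coefficient is larger, dividing by its square root yields a smaller corrected correlation (.28 vs. .35 here), so true relationships are systematically understated when intrarater estimates are used in place of interrater ones.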