The statistic kappa was introduced to measure nominal scale agreement between a fixed pair of raters. In this paper kappa is generalized to the case where each of a sample of subjects is rated on a nominal scale by the same number of raters, but where the raters rating one subject are not necessarily the same as those rating another. Large-sample standard errors are derived, and a numerical example is given.
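The generalization described above, commonly known as Fleiss' kappa, can be sketched as follows. This is a minimal illustration, not the paper's derivation: it assumes the standard formulation in which each of N subjects receives n ratings distributed over k categories, agreement within each subject is measured by the proportion of agreeing rater pairs, and chance agreement is estimated from the marginal category proportions.

```python
def fleiss_kappa(ratings):
    """Compute the many-rater kappa for a table of category counts.

    ratings: list of lists; ratings[i][j] is the number of raters who
    assigned subject i to category j. Every row must sum to the same
    number of raters n, with n >= 2.
    """
    N = len(ratings)                      # number of subjects
    n = sum(ratings[0])                   # raters per subject (constant)
    total = N * n                         # total number of ratings

    # Per-subject agreement: proportion of concordant rater pairs.
    P_i = [
        (sum(c * c for c in row) - n) / (n * (n - 1))
        for row in ratings
    ]
    P_bar = sum(P_i) / N                  # mean observed agreement

    # Marginal proportion of ratings falling in each category.
    k = len(ratings[0])
    p_j = [sum(row[j] for row in ratings) / total for j in range(k)]
    P_e = sum(p * p for p in p_j)         # expected chance agreement

    return (P_bar - P_e) / (1 - P_e)
```

With perfect agreement (every rater on a subject chooses the same category) the statistic equals 1; values near 0 indicate agreement no better than chance, and negative values indicate agreement worse than chance.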