Another technique is the Kappa statistic (KHAT), which measures the agreement between the classified map and the reference data while accounting for the agreement expected by chance. Unlike overall accuracy, which uses only the diagonal elements of the error matrix, the Kappa statistic incorporates all cells of the matrix, including the off-diagonal elements. Kappa values can theoretically range from -1 to 1; a value close to 1 indicates strong agreement between the two datasets, a value near 0 indicates agreement no better than chance, and negative values indicate agreement worse than chance (Congalton, 1991; Congalton and Green, 1999).
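The calculation above can be sketched in a short function. This is a minimal illustration, not the authors' implementation: the function name `kappa_statistic` and the 3-class error matrix `cm` are hypothetical, and the matrix convention (rows = classified, columns = reference) is assumed.

```python
import numpy as np

def kappa_statistic(error_matrix):
    """Compute the Kappa statistic (KHAT) from a square error matrix.

    Assumes rows are the classified (map) labels and columns are the
    reference labels; any consistent orientation gives the same result.
    """
    m = np.asarray(error_matrix, dtype=float)
    n = m.sum()                          # total number of samples
    observed = np.trace(m) / n           # observed (overall) agreement
    row = m.sum(axis=1)                  # row marginals (classified totals)
    col = m.sum(axis=0)                  # column marginals (reference totals)
    expected = (row * col).sum() / n**2  # chance agreement from the marginals
    return (observed - expected) / (1 - expected)

# Hypothetical 3-class error matrix for illustration only
cm = [[45, 4, 1],
      [6, 40, 4],
      [0, 3, 47]]
print(round(kappa_statistic(cm), 3))  # -> 0.82
```

Here the overall accuracy is 132/150 = 0.88, but Kappa is lower (0.82) because it discounts the agreement that would occur by chance given the row and column totals, which is why it uses the whole matrix rather than just the diagonal.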