When two binary variables represent two individuals' attempts to measure the same thing, you can use Cohen's Kappa (often simply called Kappa) as a measure of agreement between the two individuals.
Kappa takes the proportion of observations that fall in the main diagonal of the table (the observed agreement) and adjusts it for the amount of agreement that would be expected by chance alone.
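In the usual notation this adjustment is kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion of agreement and p_e is the proportion of agreement expected by chance from the raters' marginal proportions. The following is a minimal sketch of this calculation; the function name and the example counts are illustrative, not taken from the text.

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's Kappa from a square contingency table of counts.

    Rows are rater A's categories, columns are rater B's categories.
    """
    table = np.asarray(table, dtype=float)
    n = table.sum()
    # Observed agreement: proportion of objects in the main diagonal.
    p_o = np.trace(table) / n
    # Chance-expected agreement: for each category, the product of the
    # two raters' marginal proportions, summed over categories.
    row_marginals = table.sum(axis=1) / n
    col_marginals = table.sum(axis=0) / n
    p_e = np.sum(row_marginals * col_marginals)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 2 by 2 table of counts for 100 objects rated by two raters.
counts = [[45, 5],
          [10, 40]]
print(round(cohens_kappa(counts), 3))  # p_o = 0.85, p_e = 0.50, kappa = 0.7
```

Here the two raters agree on 85% of the objects, but half of that agreement would be expected by chance, so Kappa reports 0.7 rather than 0.85.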
Two raters are asked to classify objects into categories 1 and 2. The table below gives the cell probabilities for the resulting 2 by 2 table.