The reliability of peer assessments is determined by the degree of inter-rater agreement. Most studies report high reliability coefficients (in the .80s and .90s), indicating that peers agree about the job performance of group members. The validity of peer assessments is determined by correlating them with criterion measures usually collected later, such as who successfully completed a training program, who got promoted first, the size of raises, and so on. What is uncanny is that group members who have known one another a relatively short time (two to three weeks) can be quite accurate in their long-term predictions about one another. Validity coefficients are impressive, commonly in the .40 to .50 range.

The peer nomination technique appears best at identifying people who have extreme levels of attributes compared with other members of the group. Peer ratings are used most often but have only marginal empirical support; it has been suggested that their use be limited to giving employees feedback on how others perceive them. Relatively few data are available on the value of peer rankings, although they may be the best method for assessing overall job performance.
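The validation procedure described above can be sketched in code: average each member's peer ratings, then compute the Pearson correlation between those averages and a criterion measure collected later. The data below are invented purely for illustration (the member names, rating values, and criterion scores are assumptions, not findings from any study).

```python
from statistics import mean

def pearson_r(xs, ys):
    # Pearson product-moment correlation between two equal-length score lists
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: each row holds one group member's ratings from three peers,
# gathered after only a few weeks of acquaintance
peer_ratings = [
    [4, 5, 4],   # member A
    [2, 3, 2],   # member B
    [5, 5, 4],   # member C
    [3, 3, 3],   # member D
    [1, 2, 2],   # member E
]
# Hypothetical criterion collected later (e.g., a supervisor performance score)
criterion = [4.5, 2.5, 4.8, 3.2, 1.9]

# A validity coefficient in the sense used here: the correlation between
# averaged peer ratings and the later criterion
avg_ratings = [mean(r) for r in peer_ratings]
validity = pearson_r(avg_ratings, criterion)
```

With fabricated data like this the coefficient is arbitrary; the point is only the mechanics of how a validity coefficient in the .40 to .50 range would be computed from real ratings and a real criterion.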