Or to take another example, doing analogies is a task that predicts grades in school fairly well. Again, no one knows quite why, because schoolwork ordinarily does not involve doing analogies. So psychologists have had to be security conscious, for fear that if students got hold of the analogy test answers, they might practice, become good at analogies, and "fake" high aptitude. What is meant by faking here is that doing well on analogies is not part of the criterion behavior (getting good grades), or else it could hardly be considered faking. Rather, the test must have some indirect connection with good grades, so that doing well on it through practice destroys its predictive power: hence the high score is a "fake." The person can do analogies, but that no longer means he will get better grades. Put this way, the whole procedure seems like a strange charade that testers have engaged in because they did not know what was going on, behaviorally speaking, and refused to take the trouble to find out as long as the items "worked."

How much simpler it is, both theoretically and pragmatically, to make explicit to the learner what the criterion behavior is that will be tested. Then psychologist, teacher, and student can collaborate openly in trying to improve the student's score on the performance test. Certain school achievement tests, of course, follow this model. In the Iowa Test of Basic Skills, for instance, both pupil and teacher know how the pupil will be tested on spelling, reading, or arithmetic, how he should prepare for the test, how the tests will be scored, and so on. What is proposed here is that all tests should follow this model. To do otherwise is to engage in power games with applicants over the secrecy of answers and to pretend knowledge of what lies behind correlations, which does not in fact exist.