argument (American Educational Research Association et al. 1999; Cronbach 1988; Kane
2001; Messick 1989; Mislevy 1996) implicitly associate forms of evidence with single
cornerstones and the assessment stages they mediate. For example, consider the
development of a typical assessment consisting of a set of multiple-choice items. The
scoring guide for each item is the list of possible choices—the correct answer and the
distractors—presented to the student. This list is typically generated with the items, and
only the items, in mind, drawing upon a task decomposition provided by content experts
and/or actual student responses to the items. Consideration is given to whether the scoring
guide accurately represents the sorts of things that students are likely to say and whether
those responses accurately indicate the presumed underlying cognitive processes. In the
most recent Standards for Educational and Psychological Testing (American Educational
Research Association et al. 1999), this is referred to as evidence based on response
processes. In the eight-stage validation model of Crooks et al. (1996), it corresponds to
the scoring link (see also Kane 2006), and in Messick’s (1995) construct framework, to
the substantive aspect. During this stage in the development and validation
cycle, the connection between the tasks and the scoring guides is foregrounded, while the
measurement model and the model of cognition are backgrounded.