Scientific Reasoning as a Multidimensional Proficiency
An issue that invariably arises when we work with teachers on scoring student work is which aspects of a student’s response, and how many, should be taken into account when assigning scores. Often, this discussion centers on whether the student has the right idea and/or whether they can express that idea in correct scientific language. In our experience, teachers have difficulty coming to a consensus on how to define these characteristics, how to locate evidence of them in student work, and how to weigh their relative importance.
The EBRAS sheds light on these issues by disentangling and defining in detail the proficiencies that contribute to students’ use of evidence in scientific reasoning. The constructs of conceptual sophistication, specificity, and validity offer an alternative framework for evaluating student work, one that is more accurate, precise, and meaningful than characterizing whether a student has the right idea or is using correct scientific terminology. Having the right idea most likely depends on both conceptual sophistication (which characterizes conceptual understanding and diagnoses the presence of misconceptions) and validity (which characterizes whether those concepts are correctly applied to the situation at hand). Using correct scientific terminology most likely depends on specificity (which captures whether conditions of applicability are clearly defined) but also on conceptual sophistication (because the conceptual sophistication outcome space relies on the concepts named in student responses to determine levels). That the three EBRAS constructs are confounded in the characterizations of both having the right idea and using correct scientific terminology helps to explain why teachers’ evaluation of students’ scientific reasoning has been, in our experience, contentious and difficult.
Accurate performance on traditional forced-choice science assessment items is positively predicted, about equally well, by the conceptual sophistication and validity proficiencies (see Table 11). Yet, together, these two proficiencies explain only 47% of the variation in accuracy. That other proficiencies would underlie accurate performance on this type of item is not surprising; what is notable is that so much of the variance remains unexplained. Although further research may discover and define additional proficiencies related to scientific reasoning, the magnitude of this unexplained variance suggests that some proficiencies unrelated to scientific reasoning (e.g., test-taking strategies or the cognitive processes of recognition and recall) must also explain substantial amounts of the variance. This speaks strongly against interpreting accurate performance on traditional forced-choice science assessment items as a good indicator of proficient scientific reasoning. Furthermore, scientific reasoning has been shown to involve at least one proficiency, specificity, that (a) can be reliably measured given these data, (b) is not very strongly correlated (r < .9) with the other dimensions, and yet (c) does not explain a statistically significant amount of the variation in accuracy (p = .92).

These results nicely illustrate the stance of the EBR Framework (Brown et al., this issue) that accuracy is best thought of as one, but not the only, outcome of a scientific reasoning process that depends on multiple, separable proficiencies. Based on the EBR Framework, the EBRAS is a tool that allows the characterization of students’ use of evidence in deep, rigorous, and multidimensional terms, making observable the quality of the individual components and processes involved in scientific reasoning. Such a characterization is complementary, but not equivalent, to a determination of whether a student’s claims are true or false.
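The quantitative claims above can be summarized in equation form. As a minimal sketch, assuming the analysis reported in Table 11 is an ordinary multiple regression of item accuracy on the proficiency estimates (the symbols CS, SPEC, and VAL are shorthand introduced here for the conceptual sophistication, specificity, and validity estimates, not notation taken from Table 11):

\[
\text{Accuracy}_i = \beta_0 + \beta_1\,\mathrm{CS}_i + \beta_2\,\mathrm{VAL}_i + \varepsilon_i, \qquad R^2 = .47,
\]

with \(\beta_1 \approx \beta_2 > 0\), so that \(1 - R^2 = .53\) of the variation in accuracy is left to other proficiencies and to factors outside scientific reasoning. On this reading, adding SPEC as a third predictor would yield a coefficient not significantly different from zero (\(p = .92\)).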