Learning Progressions and Cognitive Models
Cognitive models for learning seek to explain and predict student
performance on assessment tasks in terms of profiles of student
skills and corresponding task requirements. If a student has mastered
all of the attributes required by a task, we would expect the student
to be consistently successful in performing it; if the student has not
mastered all of the required attributes, we would expect the student
to be less successful, to be completely unsuccessful, or to perform at
some chance level, depending on the assumptions built into the model
(de la Torre & Minchen, this issue). An assessment involving a number of tasks
with different attribute requirements can then be used to identify
the attributes that have been mastered by the student and those that
have not been mastered. It is easy to see how this kind of information
could be useful to teachers and students.
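As a concrete illustration (not drawn from the article itself), the following minimal Python sketch encodes this all-or-nothing assumption in the style of the DINA model; the slip and guess parameters and the attribute profiles are hypothetical values chosen purely for illustration.

import numpy as np

# DINA-style sketch: a correct response is likely only when the student
# has mastered every attribute the task requires; otherwise the student
# performs at a chance ("guess") level. Slip and guess are hypothetical.
def p_correct(mastered, required, slip=0.1, guess=0.2):
    # True when every attribute the task requires is mastered
    has_all = np.all(mastered | ~required)
    return 1.0 - slip if has_all else guess

profile = np.array([True, True, False])  # student has mastered attributes 1 and 2
task = np.array([True, False, True])     # task requires attributes 1 and 3
print(p_correct(profile, task))          # 0.2: attribute 3 is missing, so chance level

Other models in this family relax the all-or-nothing assumption, so that partial mastery of the required attributes raises the probability of success rather than leaving it at the chance level.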
Identifying the person and task attributes that are most relevant
to a discipline is potentially a labor-intensive activity, as is the
development of an appropriate statistical model for specifying the
relationship between the attributes mastered by a student, the
attributes required by a task, and the expected performance of the
student on the task. There are also questions about how large the
domain being modeled should be (the domain size) and how general
or specific the attributes should be (the attributes’ grain size). As de
la Torre and Minchen (this issue) point out, given that we cannot
have more than five to ten attributes in the statistical model without
running into serious problems in estimation, there is a tradeoff
between the domain size and the grain size, but there is also a need
to ensure that the attributes being assessed are the most relevant
given the purpose of an assessment. The attributes in a cognitive
model are not necessarily ordered or hierarchical, but they can be;
that is, the assumption that mastery of one attribute is a prerequisite
for another attribute can be built into the model as a constraint.
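To make the task-requirement and prerequisite ideas concrete, the sketch below (again hypothetical, not taken from the article) encodes task requirements as a small Q-matrix and enumerates the attribute profiles that a prerequisite constraint leaves admissible. It also hints at why estimation breaks down beyond roughly ten attributes: the number of unconstrained latent profiles grows as 2 to the power of the number of attributes.

import itertools
import numpy as np

# Hypothetical Q-matrix: rows are tasks, columns are attributes;
# a 1 means the task requires that attribute.
Q = np.array([
    [1, 0, 0],  # task 1 requires attribute 0 only
    [1, 1, 0],  # task 2 requires attributes 0 and 1
    [0, 0, 1],  # task 3 requires attribute 2 only
])

# Hypothetical hierarchy: attribute 0 is a prerequisite for attribute 1.
PREREQUISITES = {1: 0}

def admissible(profile):
    # A profile is ruled out if it claims mastery of an attribute
    # without mastery of that attribute's prerequisite.
    return all(profile[pre] for attr, pre in PREREQUISITES.items() if profile[attr])

n_attributes = Q.shape[1]
all_profiles = itertools.product([0, 1], repeat=n_attributes)
allowed = [p for p in all_profiles if admissible(p)]
# Without the constraint there are 2**3 = 8 latent profiles;
# the prerequisite rules out two of them, leaving 6.
print(len(allowed), "of", 2 ** n_attributes, "profiles are admissible")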
Model-based assessments can provide relatively detailed
information on the attributes (e.g., skills and conceptual
understandings) that each student has and has not mastered;
with a small grain size, this information can be quite fine-grained.
Such specific indications of the weaknesses in a student’s mastery
of a topic can be used to target instruction on those soft spots. With
a larger grain size, more general guidance can be obtained. But there
is no such thing as a free lunch. In order to realize these benefits to
a substantial degree, it is necessary that the model fit the data and
that it provide a coherent and instructionally relevant explanation
of student performance.
