Direct assessment of writing skill, usually considered synonymous
with assessment by means of writing samples,
is reviewed in terms of its history and with respect to evidence
of its reliability and validity. Reliability is examined as
it is influenced by reader inconsistency, domain sampling,
and other sources of error. Validity evidence is presented showing
the reported relationships between direct assessment scores and
criteria such as class rank, English course grades, and instructors'
ratings of writing ability. Evidence
on the incremental validity of direct assessment over and
above other available measures is also given. It is concluded
that direct assessment makes a contribution but that methods
need to be developed to improve its reliability and reduce its
costs. New automated methods of textual analysis and new
kinds of direct assessment that yield more than a single score
are suggested as two approaches to better direct assessment.