followed the successful path blazed over the previous two
decades by physics-education researchers. Physics-education
research (PER) has shown that interactive learning strategies
significantly improve student understanding. Astronomy-education
research (AER) has begun to show that carefully
adapted versions of those research-validated learning strategies
can achieve large gains in the Astro 101 classroom. To
determine how effective new and innovative teaching
strategies are for Astro 101 students, we conducted
a national study involving nearly 4000 students at 31
colleges and universities. Before discussing the key results of
our study, we share some highlights from PER that have influenced
our work.
Physics education leads the way . . .
Over the past several decades, a number of highly effective
research and curriculum-development models have emerged
from the PER community.5 (See also the PHYSICS TODAY articles
by Edward Redish and Richard Steinberg, January 1999,
page 24, and by Carl Wieman and Katherine Perkins, November
2005, page 36.) Physics-education researchers have made
much progress toward determining what naive misconceptions
and reasoning difficulties students have in introductory
physics. The results of that research have been used to develop
curricula that specifically target those difficulties. The
most successful instructional strategies have focused on getting
students to become actively engaged in their own learning,
as opposed to passively listening to lectures.
A necessary step in the progress of PER was the creation
of research-validated assessment instruments that let instructors
measure the effectiveness of their instruction. Among the
first such assessment instruments was the widely adopted
Force Concept Inventory.6 The FCI is a collection of 30 multiple-choice
questions on the basic concepts of Newton’s laws,
designed to force students to choose between Newtonian
concepts and “common-sense” alternatives. The FCI
was widely adopted in the physics community because it focused
on a topic central to all first-term introductory courses,
and also because its simple design enabled instructors to easily
measure how much students gained in their understanding.
That wide use allowed Richard Hake in 1998 to report a
meta-study of FCI results from 6000 students enrolled in
classrooms all over the country.7 As a measure of student
learning in a particular course, Hake calculated the normalized
learning gain
g = (〈post%〉 − 〈pre%〉)/(100 − 〈pre%〉),
where 〈pre%〉 and 〈post%〉 are class-averaged scores in an