To further investigate this relationship, we use the number of books a child reports
reading in the last month in school as a proxy for the time teachers spend on reading. We then
estimate local average treatment effects of reading on students' reading test scores.12 If the
decline in test scores resulted from the reduction in the time teachers spent on reading, then the
coefficient on the LATE estimate should be similar for both surveys. This is, in fact, the case.
The estimates are 0.017 (p-value 0.017) and 0.020 standard deviations per book (p-value 0.056)
for the first and second surveys, respectively.13 This suggests that the effect of the curriculum
change remained consistent across the two periods and that the decline in test scores was due to
the reduced focus on children reading after the read-a-thon period.
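The study's data are not available here, but the LATE estimate described above can be sketched as a simple Wald/2SLS estimator: treatment assignment serves as an instrument for books read, and the ratio of reduced-form to first-stage covariances recovers the per-book effect. Everything below is a synthetic illustration; the variable names and data-generating process are assumptions, with the true per-book effect set to 0.02 SD to mirror the reported magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical data-generating process: random assignment z shifts the number
# of books read, and each additional book raises the test score by 0.02 SD.
# "ability" is an unobserved confounder that biases naive OLS upward.
z = rng.integers(0, 2, n)                         # treatment assignment (instrument)
ability = rng.normal(0.0, 1.0, n)                 # unobserved confounder
books = 2 + 3 * z + ability + rng.normal(0.0, 1.0, n)
score = 0.02 * books + 0.3 * ability + rng.normal(0.0, 1.0, n)

def late_wald(y, x, z):
    """Wald/2SLS estimate of the effect of x on y, instrumenting with z:
    cov(z, y) / cov(z, x)."""
    return np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

beta_iv = late_wald(score, books, z)              # close to the true 0.02
beta_ols = np.cov(books, score)[0, 1] / np.var(books, ddof=1)  # confounded, larger
```

Because books read is correlated with unobserved ability, the naive OLS slope overstates the effect; the instrumented estimate recovers the causal per-book effect for compliers, which is the sense in which the text's 0.017 and 0.020 estimates are comparable across surveys.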
We also investigate differences in the observed treatment effects for a number of subsets
of our sample defined through the baseline survey. In results not presented in this manuscript,14
we test for differences in treatment effects by gender, age, language spoken at home, and
baseline reading score. We find almost no evidence of systematically different treatment effects
for different types of students for either follow-up period. The one exception is that,
for the first follow-up period, the treatment effect increases with students’ baseline test scores. In
a regression interacting treatment with baseline score, we find that students experienced a
0.12 standard deviation increase at the control baseline mean (statistically significant at the 1
percent level) and then experienced an increased effect of 0.09 standard deviations for each
additional standard deviation they scored at baseline (significant at the 10 percent level). While
both coefficients are still positive at the second follow-up, the magnitudes are much smaller.
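The interaction regression just described — treatment effect varying linearly with baseline score — can be sketched with ordinary least squares on a design matrix containing treatment, baseline score, and their interaction. This is a synthetic illustration, not the paper's specification: the data-generating process plants a 0.12 SD effect at the control baseline mean rising by 0.09 SD per baseline standard deviation, matching the reported coefficients.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

baseline = rng.normal(0.0, 1.0, n)    # baseline score, centered at the control mean
treat = rng.integers(0, 2, n)         # treatment indicator

# Hypothetical outcome: the treatment effect is 0.12 SD at baseline = 0
# and grows by 0.09 SD for each additional baseline SD.
y = 0.5 * baseline + treat * (0.12 + 0.09 * baseline) + rng.normal(0.0, 1.0, n)

# Design matrix: intercept, treatment, baseline, treatment x baseline.
X = np.column_stack([np.ones(n), treat, baseline, treat * baseline])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

effect_at_mean = coef[1]       # effect at the control baseline mean (~0.12)
effect_slope = coef[3]         # extra effect per baseline SD (~0.09)
```

Centering the baseline score at the control mean is what makes the treatment coefficient interpretable as the effect "at the control baseline mean," as in the text; without centering, it would be the effect at a baseline score of zero on the raw scale.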
