This paper is, in part, a sequel to an earlier work by the senior author (Byrd, 2007) that examined
the lack of reporting of confidence intervals and effect sizes in the Educational Administration
Quarterly. Further investigation revealed that this statistical dilemma was not limited to a
particular journal; rather, the failure to report confidence intervals and effect sizes is an issue
that is prevalent throughout the field. The findings of the present study, which include an
examination of the reporting of confidence intervals and effect sizes for quantitative studies in
the Journal of Educational Administration (JEA), are consistent with those from the EAQ. These
further underscore the widespread failure to place statistical results in their proper
context. Many of the articles examined focused on qualitative research, with authors
attempting to generalize findings from extremely small samples to extremely large target
populations. Even more disheartening is the failure of authors to properly interpret
statistically significant results from quantitative studies. While this paper does not argue
for or against a particular research paradigm, its intent is to focus on the proper
interpretation of findings, especially from a quantitative stance, since most university
preparation programs fail to cover this subject adequately (Thompson, 1999).
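To make the two under-reported quantities concrete, the sketch below computes a standardized effect size (Cohen's d with a pooled standard deviation) and a confidence interval for a difference in group means. This is an illustrative example, not drawn from the studies reviewed here: the function names `cohens_d` and `diff_ci` are hypothetical, and the interval uses a normal approximation appropriate for large samples, using only the Python standard library.

```python
import math
from statistics import NormalDist, mean, stdev

def cohens_d(group_a, group_b):
    """Standardized mean difference using the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    sa, sb = stdev(group_a), stdev(group_b)
    # Pool the two sample variances, weighted by degrees of freedom.
    pooled = math.sqrt(((na - 1) * sa**2 + (nb - 1) * sb**2) / (na + nb - 2))
    return (mean(group_a) - mean(group_b)) / pooled

def diff_ci(group_a, group_b, confidence=0.95):
    """Normal-approximation CI for the difference in means (large samples)."""
    na, nb = len(group_a), len(group_b)
    diff = mean(group_a) - mean(group_b)
    se = math.sqrt(stdev(group_a)**2 / na + stdev(group_b)**2 / nb)
    # Two-sided critical value from the standard normal distribution.
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return diff - z * se, diff + z * se

# Hypothetical scores for two groups, purely for illustration.
a = [5, 6, 7, 8, 9]
b = [1, 2, 3, 4, 5]
d = cohens_d(a, b)
lo, hi = diff_ci(a, b)
```

Reporting the interval (lo, hi) alongside d tells readers both the plausible magnitude of the difference and its practical size, which a p-value alone cannot convey.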