In this paper we rank accounting Ph.D. programs by evaluating the volume of research
published by each program’s graduates. We add to previous research by decomposing a single,
overall ranking into specific rankings for distinct topical areas (AIS, audit, financial, managerial, and
tax) and research methodologies (archival, analytical, and experimental), and we present ranking
results to provide useful information to accounting decision makers. We develop our rankings by
examining an index of 11 major academic accounting journals. Finally, to provide evidence of the
importance of disaggregated rankings, we correlate an overall, single ranking with the individual
topical and methodology rankings to ascertain the degree of correspondence between them.
The results highlight the Ph.D. research programs in each topical area and methodology that
produce the highest volume of alumni publications. Furthermore, the results suggest that the
disaggregated rankings are important for identifying programs with particular topical area
and methodology strengths, because the correlations between the overall ranking and the individual topical
area and methodology rankings vary significantly. Thus, our research adds an important extension to
past ranking studies and should be useful to multiple constituencies.
This study has several limitations. First, the rankings are based on historical data. The past performance
of a doctoral program’s graduates may not accurately predict the performance of its future
graduates. For example, a school may lose strong researchers and mentors in a particular area, such
that the school is no longer able to produce high-quality graduates as it once did. At
the other end of the spectrum, schools that have recently added Ph.D. programs, such as Emory
University and Bentley University, may be able to produce strong researchers, but our rankings
will not reflect that because there is little or no publication history by which to evaluate their graduates’ research performance.
Second, we are unable to scale our rankings by the number of graduates from a program. Some
programs may produce excellent researchers who cause the school to be ranked
very highly while also producing many graduates who are unsuccessful
researchers. We cannot speak to the number of graduates who do not publish in our set
of journals. Finally, we recognize that without a coauthorship weighting validated by
empirical data, it is impossible to give partial credit to each author without introducing some level
of subjectivity or bias. If an empirically validated weighting standard is developed in
the future, the rankings could be reevaluated using that weighting.
Although these rankings are likely to be useful to Ph.D. applicants, we also offer several
important caveats for applicants to consider. First, the decision as to which program to
attend is multifaceted and complex. A simple research ranking does not take into account other