marginalize some types of research or engagement with practice. For the academic institution, it would be regrettable if, in the pursuit of excellence, we were to see the development of ‘average universities’ or ‘average business schools’, where, due to isomorphism, a distinctive profile and expertise are replaced by a standard approach to doing research. Finally, for the accounting academe as a whole, the regret would be to see a far less pluralistic and interesting picture of accounting knowledge and a drift toward the isolation of accounting scholars from the outside world.
In a context of strong institutional pressures for conformity, realizing such a balanced approach to research orientation is no easy task – neither for those who design research policies or define the criteria used for research evaluation, nor for those who are supposed to ‘deliver’. It is most probably a task that will require both pressure from ‘the bottom’ and responsible conduct at ‘the top’. And it is a problem that needs to be considered in both its technical and its cultural dimension.
In technical terms, the main challenge is to ensure that the control and evaluation systems in place do not equate research quality only with the realization of particular variants of research. This does not mean that one has to
subscribe to a particularistic understanding of quality which would render any comparative appreciation of quality
impossible. But some acknowledgment of the contingency of research quality – in terms of different research approaches,
methodologies, or traditions – would seem more appropriate than a universal definition of what is ‘good’ research. As
mentioned before, this is no easy task given that journal rankings, which nowadays dominate the evaluation of research, are
hardly ever established on the basis of promoting such diversity. It would therefore appear important to at least question and
challenge imbalances and biases in existing rankings and to defend the case for a multitude of voices. This does not entirely
solve the problem of inadequate recognition of diversity, but it alleviates it in an important way.
At the same time, there is a need to look beyond rankings and to allow for alternative representations of research quality.
Rather than “continuing to hope that a single metric might adequately reflect quality” – as the reliance on a ranking implies – we “need to collectively generate a wider array of appropriate approaches to recognize quality” (Adler and Harzing, 2009, p. 90). This may include, for example, recognition for research that is particularly creative, innovative, relevant to practice, or important for a local community. If we want academics to think “outside the box”, we need to give them credit for exercising such “dissent” (Young, 2009). Such recognition is not at all limited to journal papers, but could include books and
research monographs, conference presentations and other publication forms. Individual academics can proactively create
such accounts and make them visible to the relevant institutions – universities, funding bodies, evaluation agencies, etc. It is
then up to these institutions to include such accounts when making sense of research performance.
Finally, let us not forget that our dedication to our work is not necessarily (or primarily) a function of how tight the
accountability and control mechanisms are. Evaluation bodies, committees, rankings and incentive systems can play an
important role in motivating and guiding academics. Yet, their ubiquitous existence may easily lead us to forget that many of
us are intrinsically motivated to dedicate our time and effort to interesting research questions. Research orientation was
certainly not absent before the introduction of such control mechanisms. For centuries, scientists, social theorists and
philosophers have conducted path-breaking research without any such mechanisms in place. In other words, the technical question of how best to design such systems only partly addresses what is really at stake.