The role of evidence in public policy making has
been described and explained in widely varying
ways. These range from a direct role for research in
framing policy, and examples of evidence leading
directly to policy change, through to evidence
being considered an optional extra of marginal
relevance—part of a complex process in which
powerful vested interests have the most influence
over policy making. In these circumstances, evidence
is used selectively to justify predetermined
positions that are largely ideologically driven, or
used to achieve tactical advantages over political
opponents.1
Although there are obvious exceptions, neither
extreme is sustainable. Most policy change is
incremental and based on a mix of influences.
When opportunities for policy reform open up,
policy makers will draw upon available evidence,
but policy change is often constrained significantly
by established structures, investments and interests.2
Although the policy-making process is complex
and politicized, evidence can have an important
place. This is most likely if evidence is available
when needed, is communicated in terms that fit
with policy direction, and points to practical
actions.3 Disappointingly, public health research
frequently fails one or more of these tests.