There is another risk with the use of a quasi-autonomous board: that
its work will be ignored by both the executive and the legislative branches
and have no real effect on budgetary decisions. A recent commentary on the
Oregon experiment observes:
This is Oregon’s Achilles heel. While the benchmarks have had a remarkable
impact on the private and nonprofit sectors—and many counties have imitated
the effort and developed their own benchmarks—state government has not
significantly reoriented its spending priorities to pursue the new goals.
(Osborne and Plastrik 1997a, 104)
The Floridian experiment—which the state legislature ultimately stopped
funding—may illustrate this point even more dramatically.
Canadian governments are based on the parliamentary rather than the
congressional model and consequently do not face the same difficulties in
budget making. The legislature is controlled by the executive and rarely upsets
expenditure proposals put forward by the executive. There is consequently no
need for a device to build consensus between the two branches on long-run
goals. In both provincial experiments with governmentwide performance
planning, the selection of social indicators and performance targets has been
conducted inside the executive branch.
At first glance, this approach seems to resolve two of the problems associated
with experiments by state governments. The risk that planning will be
detached from budgeting would seem to be reduced, since the same group of
actors within the executive is presumably making both planning and budgeting
decisions. Furthermore, an elected executive might seem to have a better-established
right to make decisions about the selection of social indicators
than an unelected board or commission. However, this approach may also
have its own significant weaknesses.
The first may be a difficulty in defining a narrow set of key performance
measures. The two state governments have gradually winnowed their list of
social indicators to a small number that are thought to be most central to
community well-being. The Oregon Benchmarks exercise began with more
than 250 measures; this list was eventually narrowed to 92 indicators, of
which 20 are given special attention as key benchmarks (Oregon Progress
Board 1997). Similarly, the GAP Commission began with a list of 270 benchmarks,
but its focus later narrowed to 57 critical benchmarks (Florida Commission
on Government Accountability to the People 1998). In both cases,
shortening the list has necessarily required some important judgments
about the relative importance of different policy areas.