Whereas evaluators hold that performance can be grasped only through a detailed examination of program specifics and by taking qualitative dimensions of performance into account, auditors focus on measuring quantifiable inputs and outputs, reflecting their background in financial accounting (see also Power, 1997a).18 The Office, for example, was sceptical of non-quantitative measures and customer satisfaction surveys.
Program evaluators in Alberta experienced turbulence after the government adopted a performance measurement agenda (Bradley, 2001). Many of those involved in program evaluation at the beginning of the 1990s had to redirect their careers into business planning and performance evaluation or leave the public service. One evaluator described the evaluation community as navigating "blindly", especially since evaluators could not rely on a large-scale network of resources like the one sustained by accountants, dedicated, among other things, to constructing expert knowledge in performance measurement.19
Also, the reports of program evaluators were not then issued publicly, which made it more difficult for evaluators to learn of experiments carried out by their peers. In contrast, the effectiveness of state auditors' laboratories depends crucially on auditors' ability to publish and widely disseminate the successful accomplishment of their performance measurement and value-for-money experiments, both federally (where the Federal Auditor General has a tradition of publishing extensive value-for-money reports) and in each of the Canadian provinces. Politicians and the media make wide use of these reports, and in doing so reinforce auditors' claims to expertise in these fields.