Intellectual capital
• Human capital (the ratio, stability, training and experience of scientific staff)
• Structural capital (IT expenditure per employee and teleworking)
• Relational capital (foreign assignments, visits and employment in teaching)
Key processes
• Government-funded research as a percentage of total income
• Percentage of new contracts with inter-institute cooperation
• Projects for foreign customers
Results
• Financial
• Intangible
• Research oriented (publications, prestigious grants)
• Economy oriented (patent applications and licence income)
• Society oriented (Internet site hits by external users)
FIGURE 9 Performance measurement in scientific organisations
Source: Adapted from Appendix A in K. H. Leitner & C. Warden 2001, 'Managing and reporting knowledge-based resources and processes in research organisations: specifics, lessons learned and perspectives', Management Accounting Research, 15 (1), pp. 47-8, with permission from Elsevier.
The systems of metrics described above were designed to plot the impact of particular types of change initiative. Indeed, these examples illustrate how a creative approach to measurement can yield metrics suitable for monitoring change in almost any situation. Over the past decade, managers have applied their imagination to this challenge, and business has witnessed a huge increase in the creative development of metrics to measure change. However, researchers are now reporting a backlash against the measurement craze.
One example of a reaction against measurement comes from the Canadian province of Alberta.22 In 1993, a new government was elected with a mission to cut its deficit by reforming the public sector and making it more accountable. Across-the-board cuts were imposed, but in addition a more searching exercise was introduced in which missions, goals and strategies were defined for every unit of government administration. Performance measures were then cascaded down to monitor accomplishments. The Treasury officials driving this exercise insisted that all activities be measured, arguing ‘if you can’t measure it, it is not worth doing’. To begin with, managers supported the new goals and measures, expecting that the data would supply reasoned justification for their aims and activities. Experience taught them otherwise. Measures imposed from above blocked entrepreneurialism and initiative and were used to justify arbitrary rationalisation plans. This approach to measuring change became counterproductive: it made government administration less responsive to and thoughtful about real needs, and it bred resistance amongst the middle managers whose expertise was marginalised. Despite the apparent ‘rationality’ of business planning, such ‘strategy-driven measures’ cannot be scientifically independent or positivist tools to measure change; they will always retain a ‘subjective’ character shaped by the political processes governing their design and implementation.
TQM analytical tools
The positivist (or scientific) techniques most widely used to measure and evaluate change activities are TQM analytical tools. (We gave an account of the origins and characteristics of TQM in chapter 4.) TQM is commonly employed to implement organisational change.23 By 1995, 37 per cent of all Australian workplaces with 20 or more employees claimed to have TQM in place.24 Managers are attracted to TQM for many reasons. Its techniques are simple and can be applied by most employees, yet they also make rigorous use of hard statistical data. In addition, TQM offers practical remedies to real quality problems.
TQM is usually described as a philosophy rather than a technique. It has a number of defining features, including the following:
• an aim to improve quality, which is defined by customer needs
• the use of systematic measurement and analysis of processes to reveal the origins of quality problems
• the involvement of employees in process improvement
• a ‘holistic’ approach to quality improvement, including all employees, all aspects of operations, and external parties (including customers and suppliers).
TQM requires small teams of workers (referred to by a variety of titles, including quality circles and productivity or process improvement groups) to make use of a number of tools or techniques that enable them to identify and measure faulty processes that cause abnormal variance in quality. These tools include:
• brainstorming
• statistical process control (SPC; see the sketch following this list)
• flowcharts and workflow diagrams
• Pareto analysis (the 80:20 rule)
• cause-and-effect charts (Ishikawa diagrams).
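Of these tools, SPC is the most statistical in character. As a minimal sketch of the idea (with invented measurements and a hypothetical baseline, not figures from the text), the following Python fragment sets control limits at three standard deviations from a stable baseline mean and flags later samples that fall outside them, which is the signal of abnormal variance that quality teams investigate.

```python
import statistics

# Baseline measurements taken while the process was known to be stable
# (hypothetical figures for illustration only).
baseline = [10.02, 9.98, 10.01, 10.00, 9.97, 10.03, 9.99, 10.02, 10.01, 9.98]

mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
upper, lower = mean + 3 * sigma, mean - 3 * sigma  # the usual 3-sigma limits

# New samples: anything outside the limits signals abnormal
# (special-cause) variance worth investigating.
for i, x in enumerate([10.01, 9.99, 10.61, 10.02], start=1):
    status = "in control" if lower <= x <= upper else "investigate"
    print(f"sample {i}: {x:5.2f}  {status}")
```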
We have no space here for a thorough description of TQM techniques, but a brief discussion of Pareto analysis and cause-and-effect charts can illustrate how the use of data and the analysis of causation characterise TQM. Pareto analysis is based on the Pareto principle that a few of the causes account for most of the effect (the 80:20 rule). A Pareto chart is a bar chart that represents this point graphically. Its use in TQM is to present data to prioritise the most significant causes of deviation from quality standards. Each column in the bar chart represents an individual cause of quality variance. At a glance, the user can identify from the highest columns the two or three factors that cause most quality problems, allowing those factors to be prioritised for analysis and action.
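To make the 80:20 arithmetic concrete, the following Python sketch ranks a set of invented defect counts from most to least frequent and accumulates their percentage share; the first two or three causes account for most of the faults, which is exactly what the ordered columns of a Pareto chart display graphically. The causes and counts are hypothetical, not drawn from the text.

```python
from collections import Counter

# Hypothetical defect log: each entry records the cause of one quality fault.
defects = (["misaligned label"] * 42 + ["loose cap"] * 27 +
           ["scratched casing"] * 9 + ["wrong colour"] * 6 +
           ["dented box"] * 4 + ["missing manual"] * 2)

counts = Counter(defects)
total = sum(counts.values())

# Rank causes from most to least frequent, as the columns of a Pareto
# chart would be ordered, and accumulate their percentage share.
cumulative = 0.0
for cause, n in counts.most_common():
    cumulative += 100.0 * n / total
    print(f"{cause:18s} {n:3d}  cumulative {cumulative:5.1f}%")
```

Here the top two causes alone account for roughly 77 per cent of all faults, so they would be prioritised for analysis and action.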
Cause-and-effect analysis is sometimes used following Pareto analysis to identify the causes of quality problems that have been prioritised and to point the way to specific improvements. Many teams develop Ishikawa or ‘fishbone’ diagrams to assist this process. Typically, a team constructs such a diagram by first identifying a quality problem and then working backwards to isolate the major causes, minor causes, subcauses and subsets of causes. Quality teams often use brainstorming meetings to construct the diagram. They may then gather data to verify it. As soon as the causes of a problem are tracked down in this way, the team usually has little difficulty finding a solution. After the solution is implemented, measuring improvements over time is a simple matter.
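Because a fishbone diagram is, in effect, a tree of causes, it can be sketched as a nested data structure. The following Python fragment (with an invented quality problem and invented causes) stores major causes, minor causes and subcauses as nested dictionaries and prints them as an indented outline, mirroring the backwards-working construction described above.

```python
# Invented quality problem and causes, arranged as nested dictionaries:
# keys are causes, values are their subcauses (an empty dict means none).
fishbone = {
    "problem": "high rate of leaking seals",
    "causes": {
        "Materials": {"seal compound batch variation": {}},
        "Machines": {"press temperature drift": {"worn thermostat": {}}},
        "Methods": {"inconsistent torque settings": {}},
        "People": {"operators untrained on rework step": {}},
    },
}

def show(branches, depth=1):
    """Print the cause tree, indenting minor causes under major ones."""
    for cause, subcauses in branches.items():
        print("  " * depth + "- " + cause)
        show(subcauses, depth + 1)

print(fishbone["problem"])
show(fishbone["causes"])
```

Printed this way, each level of indentation corresponds to one rib of the fishbone, from major causes down to subcauses.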
Can TQM techniques prove that change interventions deliver results? Yes, but usually in a fragmentary way. The typical claims from TQM analysis relate to specific process improvements at the team level. A stores team may record improvements in the percentage of stock returned, a production team may record improvements in its quality ‘non-conformance’ rates, and an administration team may record improvements in the percentage of non-scheduled cheque runs. Such data are valuable in many ways. First, the link between cause (change action) and effect (performance gain) usually has high validity. Second, the team directly involved experiences motivational benefits.
Such fragmentary evidence is difficult to aggregate into a big picture. It is hard to tell, for example, what the overall impact of TQM is on the corporate bottom line. As a result, organisations sometimes lose track of the net effect of TQM and fail to recognise when excessive enthusiasm for micro gains conceals a macro cost. Hilmer and Donaldson tell the story of Florida Power and Light, a US utility and TQM success story: ‘At one time during the TQM implementation, Florida Power and Light had about 1900 teams working with the guidance, direction and support of eighty-five full-time staffers on TQM projects. Literally thousands of processes, ranging from the replacement of light bulbs to the paying of small accounts, were being methodically analysed by teams trained in statistical and process flow analysis. Form and bureaucracy began to drive out substance to the point where the eighty-five strong staff contingent had to be disbanded. Once line managers again started to work on the application of specific ideas to lower costs or speed up responses to customers, rather than on filling in forms, success was assured.’25
The problems in this example are not uncommon. In the absence of effective company-wide measurement and evaluation, TQM projects can run out of control, applying low-return processes and building up unnecessary overhead costs and counterproductive formalities.