In physical sciences and engineering the metaphysical paradigm has been largely superseded by the representational one, which, in turn, is increasingly complemented by emphasis on the role of models and, therefore, by the relativistic paradigm, moderated by the pragmatic instances analyzed above. The adoption of a pragmatic perspective in PM research and practice requires a critical re-conceptualization. First of all, the adoption of a model-based view, as opposed to a truth-based view, has substantial implications for the measurement process and for the interpretation of its results. From a model-based point of view, measurement is regarded as a knowledge-based process, rather than a purely empirical determination. The object whose property is measured is assumed to exist in the empirical world, but it is acknowledged that the data collected about that object result from an interpretation process, i.e., an explicit or implicit model, which belongs to the symbolic/informational realm. As a consequence, the measurement procedure must be defined, and the system under measurement designed and set up, by considering the context and the goals for which the measurement itself is being undertaken. Furthermore, particular care has to be exerted when utilizing any data or information in a context other than that for which it was intended, as this has substantial implications for the drivers, purposes and uses of PM (Behn, 2003 and Hatry, 1999). This is particularly significant in benchmarking exercises or compilation of league tables in both private and public sector contexts (Ammons, 1999 and Goldstein and Spiegelhalter, 1996). Equally, from a model-based point of view, it would be nonsensical to state that a performance indicator is either ‘good’ or ‘bad’ in absolute terms. Rather, on the basis of its goals and other relevant factors (e.g., cost, quality), an indicator could be deemed to be adequate or inadequate to purpose.
For example, an indicator that considers the average time an organization spends to produce a quote might be appropriate to monitor the organization's responsiveness to customers. This does not equate to saying that the average difference between date of verbal confirmation of receipt of quote by customers and the date of first contact by customers is the ‘right measure’ for responsiveness to customers. In fact, the calculation of variance in performance could also be informative. However, a simple average might be appropriate for monitoring purposes, whereas richer information would be necessary to support process improvement.
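The distinction between a simple average for monitoring and richer information for improvement can be made concrete with a minimal sketch. The quote turnaround times below are hypothetical, chosen only for illustration: the mean alone would suggest broadly acceptable responsiveness, while the dispersion reveals that a subset of customers waits much longer.

```python
from statistics import mean, pstdev

# Hypothetical quote turnaround times in days: the difference between the
# date of verbal confirmation of receipt of quote and the date of first
# contact, for eight recent quotes.
turnaround_days = [3, 4, 3, 12, 4, 3, 11, 4]

avg = mean(turnaround_days)       # adequate for routine monitoring
spread = pstdev(turnaround_days)  # variation, needed for improvement work

print(f"average: {avg:.1f} days, std. dev.: {spread:.1f} days")
```

Here the average (5.5 days) masks a standard deviation of 3.5 days driven by two outlying quotes; which of the two figures is the 'adequate' indicator depends entirely on whether the purpose is monitoring or process improvement.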
Second, the aims of accuracy, repeatability and objectivity in measurement emphasize the necessity to define the properties being measured, and to question and consider the influence of the measurer on the measurement results. Although most organizations utilize measures of ‘absenteeism’, for example, the ways in which absenteeism is actually defined and measured can vary substantially (Hausknecht et al., 2008). Moreover, information on absenteeism would be influenced by the person undertaking the measurement, i.e., differences in responses are likely to emerge if measurement is performed either by management or by an independent agency. Reasons for measurement (e.g., understanding reasons for absence, launching a new program to motivate employees, or simply reducing absenteeism) would also influence measurement results. Therefore, operational definitions could be adopted to enhance consistency and trust in the indicators introduced (Deming, 1986 and Goldratt and Cox, 2004). However, the relevance and unavoidability of interpretive models in the measurement process also highlight the importance of surfacing mental models in the design and use of PM systems. If mental models are considered as “deeply ingrained assumptions, generalizations, or even pictures or images that influence how we understand the world and how we take action” (Senge, 2006, p. 8), it is clear how PM could be both influenced by, and used as a way to challenge, mental models. This is particularly relevant, since long-term success often depends on the process through which management teams modify and improve the shared mental models of their organizations, their markets, and their competitors (Argyris, 1977, De Geus, 1999 and Senge, 2006).
Third, the consideration of the epistemic role of the measurer also has significant consequences in relation to organizational learning and review of PM systems. As Bohm (1980, p. 23) argued, “the attempt to suppose that measures exist prior to man and independently of him leads […] to the ‘objectification’ of man's insight, so that it becomes rigidified and unable to change, eventually bringing about fragmentation and general confusion”. If confusion exists between reality and the measurement results obtained from it, in such a way that measurement is seen as capturing the essence of objects, PM systems will inevitably be static. Research has demonstrated how the use of excessively rigid PM systems could lead to organizational paralysis, or ‘ossification’ (Smith, 1995b). Indeed, after investing substantial resources to design and implement large sets of performance indicators, organizations often decide not to modify them, partly because they are perceived as ‘perfect representations’ of performance. On the contrary, only by analyzing the data gathered through the system and, in particular, reformulating the PM system itself, can organizations improve their performance and generate single and double loop learning. Indeed, “double-loop changes cannot occur without unfreezing the models of organizational structures and processes now in good currency” (Argyris, 1992, p. 11). Therefore, reviews of PM systems must happen through an in-depth comparison between what is measured of the activities performed and which activities really occur, since measurement is related to knowledge about the state of an object, rather than knowledge about the object ‘in itself’. Consequently, reviews could be used to challenge not only the current PM system, but also the organization's strategy and its implementation (on the roles of PM systems in strategy formulation and implementation see, for example, Chenhall, 2005, Gimbert et al., 2010 and Simons, 1990).
Fourth, replacing the criterion of truth with a criterion of adequacy implies that cost and quality of measurement should be considered relevant components of the measurement process, and, therefore, assessed both before and after measurement takes place. As a consequence, the introduction of PM systems must be considered as an investment from which to expect a certain return, rather than either an inherently fruitful use of resources (Kaplan and Norton, 1992) or a ‘necessary evil’ (Brignall and Modell, 2000). Both error and uncertainty should be considered in relation to the empirical ability to obtain appropriate information on the intended property: hence, specificity and trust also become essential features of PM. Performance could be measured with great precision and still be misleading, as indicators can be precisely wrong (Mari, 2007).
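The sense in which an indicator can be ‘precisely wrong’ can be sketched numerically. In the illustration below, the data and the ‘true’ value of the measured property are hypothetical: one set of readings is tightly clustered (precise) but systematically biased, the other scatters more widely but centers on the true value.

```python
from statistics import mean, pstdev

true_value = 10.0  # hypothetical 'true' level of the measured property

# Precise but biased: readings cluster tightly, far from the true value.
precise_but_wrong = [14.9, 15.0, 15.1, 15.0, 15.0]
# Less precise but unbiased: readings scatter around the true value.
accurate_but_noisy = [8.0, 12.0, 9.5, 11.0, 9.5]

for name, readings in [("precise-but-wrong", precise_but_wrong),
                       ("accurate-but-noisy", accurate_but_noisy)]:
    bias = mean(readings) - true_value        # systematic error
    spread = pstdev(readings)                 # dispersion (precision)
    print(f"{name}: bias = {bias:+.1f}, spread = {spread:.2f}")
```

The first indicator would look highly trustworthy on repeatability alone (spread near zero) while being off by 50%; judging it ‘adequate’ requires assessing both dispersion and bias against the purpose of measurement, not dispersion alone.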
Fifth, the focus of measurement has to shift from what is measurable, which is a predominantly epistemological concern, to the nature of the objects that we want to measure, i.e., the actual organizational processes and activities being performed. Moreover, indicators should not be considered as exact pictures of reality or as unveiling presumed truths. On the contrary, they ought to be used as ways to gather information about organizational performance that is as adequate as possible. Several authors have suggested ways to tackle the inherent incompleteness of PM systems (Chapman, 1997, Lillis, 2002 and Wouters and Wilderom, 2008). Although more appropriate designs and implementations could certainly contribute to this aim, it is also important to fully acknowledge the limitations of measurement. In particular, when measuring performance the last word has to go to the object being measured, and not to the subject; as subjects we have to continuously confront ourselves with the object, and not vice versa (Ferraris, 2005).
Finally, as the etymology of ‘measurement’ suggests, PM systems should be proportionate, i.e., they should consist of an adequate number of indicators, which can inform decision-making processes, rather than aim at providing ‘true representations’ of performance. While great advancements in the theory and practice of performance measurement are certainly possible, they would have to go beyond ‘more and more precise measurement’.