Common Pitfalls in Statistical Thinking

Applying Common Sense

A primary concern in deploying best practices is that the organization may not always follow its own documented processes. Staff may apply common sense to ignore real or perceived flaws in the documented process. The authors do not wish to discourage the use of common sense when following the documented process would not add value, understanding, or insight. It does, however, raise an issue about compliance with the documented processes and the effectiveness of the process implementation. This is a sensitive point, but
requiring that staff follow an obviously inappropriate process would damage the credibility of the authors
of the framework being used, the assessment or audit team, and those defining the process in question. An
assessment/audit team cannot ignore such a problem; it must report both the noncompliance and the issues with the documented process.

For example, some organizations set upper and lower natural process limits at the 5th and 95th percentiles of their data rather than using control charts. Analyzing the bottom 5 percent of the data may be pointless, particularly when there is a natural boundary for the data, such as zero. For a skewed distribution where a relatively high percentage of the data (more than 5 percent) may be at zero, analyzing the points in the bottom 5th percentile appears unreasonable. Observations with a value of zero can be expected to occur occasionally in such a process, and analyzing the bottom 5 percent provides little or no insight. Analyzing how often, and whether, zero events should occur may be a valid question, but exploring the "whys" of the common cause system is a process improvement question, not a process control question.

Ignoring data below the 5th percentile limit because it does not make sense to treat those observations as atypical is an example of doing the right thing, even though the guideline, which implements a poor statistical technique, would require doing something else. Implementing a superior statistical technique, for example, an XmR chart, would lead to doing the right thing for the right reason. Blindly doing a causal analysis where common sense indicates that no value would be obtained would be counterproductive.
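To make the contrast concrete, the sketch below compares percentile-based "limits" with the natural process limits of an XmR chart computed from the average moving range. The data are synthetic, skewed count data assumed purely for illustration, and the variable names are illustrative; the constants 2.66 and 3.268 are the standard XmR chart scaling factors.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.poisson(0.8, size=50).astype(float)   # skewed count data with many zeros (illustrative only)

# Percentile-based "limits" as described in the text
p05, p95 = np.percentile(x, [5, 95])

# XmR natural process limits from the average moving range
mr = np.abs(np.diff(x))                       # absolute successive differences
x_bar, mr_bar = x.mean(), mr.mean()
unpl = x_bar + 2.66 * mr_bar                  # upper natural process limit
lnpl = max(x_bar - 2.66 * mr_bar, 0.0)        # often negative for count data; zero is then the practical lower boundary
url = 3.268 * mr_bar                          # upper range limit for the mR chart

print(f"percentile limits: [{p05:.2f}, {p95:.2f}]")
print(f"XmR limits:        [{lnpl:.2f}, {unpl:.2f}]  (upper range limit {url:.2f})")
print("potential assignable causes:", x[x > unpl])
```

With data like these, the 5th percentile falls at zero and flags nothing useful, while the XmR limits identify only those observations that genuinely stand apart from the common cause system.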
Sometimes the meaning of the 95th percentile limit is misinterpreted. It does not mean picking the five largest data values and doing a causal analysis; it means picking the largest 5 percent. With 20 data points, that is the single largest value; with 1,000 data points, it is the largest 50 values.
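The arithmetic is simple; a minimal, purely illustrative sketch:

```python
# Reading "the largest 5 percent" as a count of observations, not a fixed
# five values: roughly 5 percent of n observations fall above the 95th percentile.
import math

for n in (20, 1000):
    print(f"{n} observations -> about {math.ceil(0.05 * n)} value(s) above the 95th percentile")
```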
The prescription for simple arithmetic errors is training and education, with perhaps better tool support. The prescription for failing to follow a nonsensical process is more challenging, since the solution involves defining processes, and analytic techniques, that are appropriate for the kind of work being done. Defining "good" measures and analytic techniques is an extension of the implied need for "good processes" at the lower levels of models such as the eSCM-SP and CMMI. While how-to "goodness" is deliberately out of scope for what-to-do frameworks, a process is what one does, not what one documents. Processes that are not used fail to build on the lower-level capabilities that are a prerequisite for the higher-level statistical thinking in these models.