A statistician sees group features such as the mean and median as indicators of
stable properties of a variable system—properties that become evident only in the
aggregate. This stability can be thought of as the certainty in situations involving
uncertainty, the signal in noisy processes, or, to use the descriptor we prefer, central
tendency. Claiming that modern-day statisticians seldom use the term central
tendency, Moore (1990, p. 107) suggests that we abandon the phrase and speak
instead of measures of “center” or “location.” But we use the phrase here to
emphasize conceptual aspects of averages that we fear are often lost, especially to
students, when we talk about averages as if they were simply locations in
distributions.
By central tendency we refer to a stable value that (a) represents the signal in a
variable process and (b) is better approximated as the number of observations
grows.3 The obvious examples of statistics used as indicators of central tendency are
averages such as the mean and median. Processes with central tendencies have two
components: (a) a stable component, which is summarized by the mean, for
example; and (b) a variable component, such as the deviations of individual scores
around an average, which is often summarized by the standard deviation.
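To make the two components concrete, consider the following sketch. It is an
illustration of ours rather than part of the original argument: the signal value of
50 and the noise scale of 10 are arbitrary assumptions. It simulates a noisy process
and shows the mean settling toward the stable component as the number of
observations grows, while the standard deviation continues to summarize the
variable component.

    # A minimal sketch (illustrative assumptions): the "true" signal of 50 and
    # noise scale of 10 stand in for the stable and variable components.
    import random

    random.seed(1)
    true_signal = 50.0   # stable component of the hypothetical process
    noise_scale = 10.0   # spread of the variable component

    for n in (10, 100, 1000, 10000):
        sample = [random.gauss(true_signal, noise_scale) for _ in range(n)]
        mean = sum(sample) / n
        # deviations around the mean: the variable component, summarized by the SD
        sd = (sum((x - mean) ** 2 for x in sample) / n) ** 0.5
        print(f"n={n:6d}  mean={mean:6.2f}  sd={sd:5.2f}")

As n grows, the printed mean approaches the assumed signal of 50 while the standard
deviation stays near the assumed noise scale of 10, which is the sense in which the
mean is "better approximated as the number of observations grows."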
It is important to emphasize that measures of center are not the only way to
characterize stable components of noisy processes. Both the shape of a frequency
distribution and global measures of variability, for example, also stabilize as we
collect more data; they, too, give us information about the process. We might refer
to this more general class of characteristics as signatures of a process. We should
point out, however, that all the characteristics that we might look at, including the
shape and variability of a distribution, are close kin to averages. That is, when we
look at the shape of a particular distribution, we do not ordinarily want to know
precisely how the frequency of values changes over the range of the variable.
Rather, we tame the distribution’s “bumpiness.” We might do this informally by
visualizing a smoother underlying curve or formally by computing a best-fit curve.
In either case, we attempt to see what remains when we smooth out the variability.
In a similar manner, when we employ measures such as the standard deviation or
interquartile range, we strive to characterize the average spread of the data in the
sample.
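As a companion illustration (again ours, assuming an arbitrary standard normal
process), the sketch below shows that measures of spread behave like averages in
this respect: the standard deviation and interquartile range of a sample also
stabilize as more data are collected.

    # A minimal sketch (illustrative assumption: a standard normal process):
    # spread measures, like the mean, settle down as the sample grows.
    import random
    import statistics

    random.seed(2)

    def iqr(values):
        # interquartile range: distance between the first and third quartiles
        qs = statistics.quantiles(values, n=4)
        return qs[2] - qs[0]

    for n in (20, 200, 2000, 20000):
        sample = [random.gauss(0.0, 1.0) for _ in range(n)]
        print(f"n={n:6d}  sd={statistics.stdev(sample):5.3f}  iqr={iqr(sample):5.3f}")

Both printed measures wander noticeably for small samples and converge toward
stable values for large ones, which is why we count them among the signatures of a
process.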
