Theoretical statisticians have all too often accepted a
model (frequently one based on normal distribution
theory) and drawn many statistical inferences from
that model without checking its validity. That is, we
have accepted the "dogma of the normal distribution"
(or something similar) and routinely performed the appropriate
statistical inference. Possibly a better way to
proceed would be to assume that an appropriate model is
to be found among a number of models, say Ω1, Ω2, ..., Ωk,
which are suitably spaced throughout the spectrum of
possible models. Then use the data to select the model
which seems most appropriate and, with this model and
the same data, make the inferences desired. Hogg [7]
presented an example of such a procedure in defining a
robust method for estimating the center of a symmetric
distribution using several different models of symmetric
distributions of the continuous type. The models ranged
from a light-tailed one (like the uniform) to a heavy-tailed
one (like the Cauchy). Hogg has subsequently used
the term "adaptive" in referring to procedures which use
the data first to select a model and then to make an
inference based on the model chosen. Other adaptive
estimation procedures can be found in the literature (e.g.,
see Jaeckel [9]). Procedures of this type have proved
effective in other inference problems as well; for illustrations,
consider the preliminary testing schemes of Bancroft
[2] and others.
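The adaptive scheme described above can be sketched in code. The selector statistic and cutoff values below are illustrative assumptions, not Hogg's published choices: a tail-weight ratio compares the spread of the extreme 5% of the order statistics to the spread of the middle 50%, and the resulting value picks an estimator suited to light-tailed (midrange), near-normal (mean), or heavy-tailed (median) samples.

```python
import numpy as np

def tail_weight(x):
    """Tail-weight selector statistic (illustrative form).

    Ratio of the spread between the means of the upper and lower 5%
    of the order statistics to the spread between the means of the
    upper and lower 50%.  Light-tailed samples (uniform-like) give
    values near 1.9; heavy-tailed samples (Cauchy-like) give much
    larger values.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    k = max(1, int(0.05 * n))   # size of each extreme 5% block
    m = max(1, int(0.50 * n))   # size of each half
    extreme_spread = x[-k:].mean() - x[:k].mean()
    middle_spread = x[-m:].mean() - x[:m].mean()
    return extreme_spread / middle_spread

def adaptive_center(x):
    """Adaptive estimate of the center of a symmetric distribution.

    The data first select an estimator via the tail-weight statistic,
    then the same data produce the estimate.  The cutoffs 2.0 and 3.0
    are hypothetical, chosen only to separate the illustrative cases.
    """
    x = np.asarray(x, dtype=float)
    q = tail_weight(x)
    if q < 2.0:
        # light tails (uniform-like): the midrange is efficient
        return 0.5 * (x.min() + x.max())
    elif q < 3.0:
        # near-normal tails: the sample mean
        return float(x.mean())
    else:
        # heavy tails (Cauchy-like): the sample median
        return float(np.median(x))
```

The two-stage character of the procedure is visible here: `tail_weight` plays the role of the model-selection step, and the chosen branch plays the role of the inference step, both using the same data.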