The conventional tests of hypotheses and confidence interval estimates of the parameters are based on the assumption that the estimates are normally distributed. Thus, the assumption of normality of the ε_i is critical for these purposes. However, normality is not required for least squares estimation. Even in the absence of normality, the least squares estimates are the best linear unbiased estimates (b.l.u.e.); they are best in the sense of having minimum variance among all linear unbiased estimators. If normality does hold, maximum likelihood estimators can be derived: these are the values of the parameters that would have maximized the probability of obtaining the particular sample, that probability being expressed as a function of the parameters called the likelihood function. Maximizing the likelihood function in equation 3.4 with respect to β = (β_0, β_1, …, β_p) is equivalent to minimizing the sum of squares in the exponent, and hence the least squares estimates coincide with the maximum likelihood estimates. The reader is referred to statistical theory texts such as Searle (1971), Graybill (1961), and Cramér (1946) for further discussion of maximum likelihood estimation.
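The coincidence of the two estimators is easy to verify numerically. The following sketch, which assumes the normal-errors model and uses simulated data with illustrative parameter values, computes the least squares estimate from the normal equations and then maximizes the normal log-likelihood directly; the two answers agree up to numerical tolerance because, for any fixed σ², maximizing the likelihood over β amounts to minimizing the residual sum of squares in its exponent.

import numpy as np
from scipy.optimize import minimize

# Simulated data under the normal-errors model (illustrative values only).
rng = np.random.default_rng(0)
n = 100
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])  # intercept + one regressor
beta_true = np.array([2.0, 0.5])
y = X @ beta_true + rng.normal(0.0, 1.0, n)

# Least squares estimate via the normal equations: b = (X'X)^{-1} X'y.
b_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Negative log-likelihood of the normal model. The residual sum of squares
# sits in the exponent of the likelihood, so for fixed sigma^2 minimizing
# this function over beta is the same problem as least squares.
def neg_log_lik(params):
    beta, log_sigma = params[:-1], params[-1]
    sigma2 = np.exp(2.0 * log_sigma)        # parameterized to keep sigma^2 > 0
    resid = y - X @ beta
    return 0.5 * n * np.log(2.0 * np.pi * sigma2) + resid @ resid / (2.0 * sigma2)

fit = minimize(neg_log_lik, x0=np.zeros(3), method="BFGS")
b_mle = fit.x[:-1]

print("OLS:", b_ols)
print("MLE:", b_mle)   # agrees with OLS up to optimizer tolerance

Note that this equivalence applies to β only; the maximum likelihood estimate of σ² divides the residual sum of squares by n rather than by the degrees of freedom, so it differs from the usual unbiased variance estimate.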