In addition to imaging technologies, the major recent advances in spectral analysis have been in statistical methods. Early analyses used multiple linear regression of raw, first-difference, or second-difference spectra (Hruschka, 1987). Later methods coupled multiple regression with data-reduction techniques such as principal component analysis or partial least squares. Present investigations focus on artificial neural networks (Chen et al., 1995; Song et al., 1995) and on wavelets for data reduction. Each approach has advantages and disadvantages. Rapid scanning spectrophotometers
are available and permit use of all or large parts of the spectrum. Optical-filter instruments and multispectral cameras, in contrast, require wavelength selection rather than full-spectrum scanning. Limiting the number of wavelengths reduces measurement time, even with acousto-optical or liquid-crystal tunable filters, enabling on-line optical measurement for sorting operations at commercially acceptable speeds. Limiting the number of wavelengths also reduces computational time, but it may reduce the chemical information content of the data. Data processing of hyperspectral
images is particularly complex, requiring the development of hybrid analyses using both spectroscopic and imaging concepts. New statistical methods are being developed to utilize hyperspectral images efficiently for quality assessment.
Regardless of the statistical method, it is critical that the underlying relationship between the measurement and the quality attribute be valid and robust. There must be a fundamental relationship between the selected wavelengths and the chemical(s) being sensed, or the measurement will ultimately fail.
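One simple check of this principle is to ask whether a selected wavelength actually falls inside the analyte's absorption band rather than being a chance correlate. The sketch below is a hypothetical illustration with a synthetic Gaussian band at an assumed position; it ranks wavelengths by correlation with a known analyte concentration and confirms the winner lies in the band.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example: synthetic spectra driven by one analyte band
# (assumed centered at 0.3 on an arbitrary wavelength axis) plus noise.
wl = np.linspace(0.0, 1.0, 100)
analyte_band = np.exp(-((wl - 0.3) ** 2) / 0.01)
conc = rng.uniform(0.0, 1.0, 80)
spectra = np.outer(conc, analyte_band) + 0.05 * rng.standard_normal((80, 100))

# Correlate each wavelength channel with the analyte concentration.
cs = spectra - spectra.mean(axis=0)
cc = conc - conc.mean()
corr = cs.T @ cc / (np.linalg.norm(cs, axis=0) * np.linalg.norm(cc))

# The best single wavelength should sit inside the analyte's band; if it
# did not, the "relationship" would be an artifact of this data set.
best_wl = wl[np.argmax(np.abs(corr))]
```

In practice the same check is done against known absorption bands of the target constituent (e.g. water, protein, or sugar overtones in the near-infrared) rather than a synthetic band.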