4.1 Recognition Architecture
Fig. 2 shows the overall architecture for EAR. The methods
were all implemented offline using MATLAB and C. The
inputs to the processing chain are the two EOG signals that
capture the horizontal and vertical eye movement components.
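For concreteness, such a two-channel recording can be held as a pair of parallel sample buffers. The type and field names below are our own illustration, not the paper's implementation:

```c
#include <stddef.h>

/* Hypothetical container for one EOG recording: two synchronized
 * channels sampled at a common rate (names are illustrative only). */
typedef struct {
    const float *horizontal;  /* horizontal EOG component */
    const float *vertical;    /* vertical EOG component */
    size_t n_samples;         /* samples per channel */
    float sample_rate_hz;     /* common sampling rate */
} EOGRecording;
```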
In the first stage, these signals are processed to remove any
artifacts that might hamper eye movement analysis. In the
case of EOG signals, we apply algorithms for baseline drift
and noise removal. Only this initial processing depends on
the particular eye tracking technique used; all further stages
are completely independent of the underlying type of
eye movement data. In the next stage, three different types
of eye movement are detected in the processed data:
saccades, fixations, and blinks. The events returned by the
detection algorithms form the basis for extracting eye
movement features over a sliding window. In the last
stage, a hybrid method selects the most relevant of these
features and uses them for classification.
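The first stages of this chain can be sketched in C. The sketch below is our own simplification under stated assumptions: a 3-point median filter stands in for the noise-removal algorithms, and a derivative-threshold rule stands in for the saccade detection algorithm; the function names and the threshold value are hypothetical, not those used in EAR.

```c
#include <math.h>
#include <stddef.h>

/* 3-point median: a simple stand-in for the noise-removal stage. */
static float median3(float a, float b, float c) {
    if (a > b) { float t = a; a = b; b = t; }
    if (b > c) { float t = b; b = c; c = t; }
    if (a > b) { float t = a; a = b; b = t; }
    return b;
}

/* Remove short spike noise from one EOG channel (edges are clamped). */
void denoise(const float *in, float *out, size_t n) {
    for (size_t i = 0; i < n; i++) {
        size_t lo = (i > 0) ? i - 1 : 0;
        size_t hi = (i + 1 < n) ? i + 1 : n - 1;
        out[i] = median3(in[lo], in[i], in[hi]);
    }
}

/* Count saccade-like events: contiguous runs where the absolute
 * first difference of the signal exceeds a threshold `thr`. */
int count_saccades(const float *x, size_t n, float thr) {
    int count = 0, in_event = 0;
    for (size_t i = 1; i < n; i++) {
        float d = fabsf(x[i] - x[i - 1]);
        if (d > thr && !in_event) { count++; in_event = 1; }
        else if (d <= thr)        { in_event = 0; }
    }
    return count;
}
```

Running the detector on a denoised channel rather than the raw one illustrates why the artifact-removal stage comes first: a single noise spike in the raw signal produces a spurious event, whereas the median-filtered signal yields only the true step-like saccade.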