A new method, eye-movement-contingent release of speech, was developed to study the use of sound codes during visual word recognition. The method examines the effects of spoken words, rather than visual/orthographic words, on the concurrent recognition of visual words. Speech is presented while text is being viewed, and the presentation of the speech signal is coordinated with the viewing of a preselected visual target word. During sentence reading, the reader's eye position is continuously sampled, and a real-time fixation detection algorithm presents the auditory signal when a pre-specified eye position criterion (called the boundary) is met. Because the auditory stimuli are triggered from the eye-tracking data, the presentation of speech is contingent on the reader's eye movements, so that the onset of the spoken word can be specified relative to the position of the eyes on the visual target.
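To make the triggering logic concrete, the following is a minimal sketch of how a boundary-contingent release of speech might be polled in software. The callables sample_gaze and play_audio are hypothetical stand-ins for the eye-tracker and audio drivers, and the fixation criterion shown (several consecutive samples past the boundary) is only an illustrative assumption, not the algorithm used in the actual system.

    import time

    def release_speech_on_boundary(sample_gaze, play_audio, boundary_x,
                                   min_fixation_samples=3,
                                   sample_interval=0.001, timeout=10.0):
        """Poll the eye tracker and release the speech signal once the gaze
        crosses a pre-specified horizontal boundary and settles there.

        sample_gaze() -> (x, y): current gaze position in screen coordinates
                         (hypothetical, supplied by the tracker driver).
        play_audio():  starts playback of the prepared spoken word
                       (hypothetical, supplied by the audio driver).
        boundary_x:    horizontal screen coordinate of the invisible boundary
                       placed before the target word.
        """
        consecutive = 0
        start = time.time()
        while time.time() - start < timeout:
            x, _y = sample_gaze()
            if x >= boundary_x:
                consecutive += 1
                # Crude fixation criterion: several successive samples past the
                # boundary, so a saccade in flight does not trigger playback.
                if consecutive >= min_fixation_samples:
                    play_audio()            # release the spoken word
                    return time.time()      # timestamp of speech onset
            else:
                consecutive = 0
            time.sleep(sample_interval)     # wait for the next eye-position sample
        return None                         # boundary criterion was never met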
The “SEE” multi-modal eye-tracking system was developed to implement this new method. The SEE system was built mainly around a fifth-generation SRI Dual-Purkinje eye tracker, but it also provides general API support for use with other eye trackers (i.e., the ICAN and SM systems). The SEE software toolkit was installed on an IBM-compatible personal computer, which was connected to the eye tracker through an analogue-to-digital converter.
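As one illustration of how a toolkit could expose a tracker-independent interface while still reading the Dual-Purkinje signal through an analogue-to-digital converter, a small sketch follows. The class and parameter names, the read_channel callable, and the linear gain/offset calibration are assumptions made for illustration and do not describe the SEE API itself.

    from abc import ABC, abstractmethod

    class EyeTracker(ABC):
        """Common interface so the toolkit can run on different eye trackers."""

        @abstractmethod
        def read_gaze(self):
            """Return the current (x, y) gaze position in screen coordinates."""

    class DualPurkinjeTracker(EyeTracker):
        """Dual-Purkinje tracker sampled through an analogue-to-digital converter.

        read_channel is a hypothetical callable returning the raw A/D value for
        one channel; gain and offset stand in for a prior calibration step.
        """

        def __init__(self, read_channel, gain=(1.0, 1.0), offset=(0.0, 0.0)):
            self.read_channel = read_channel
            self.gain = gain
            self.offset = offset

        def read_gaze(self):
            raw_x = self.read_channel(0)   # horizontal eye-position voltage
            raw_y = self.read_channel(1)   # vertical eye-position voltage
            x = raw_x * self.gain[0] + self.offset[0]
            y = raw_y * self.gain[1] + self.offset[1]
            return x, y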