Ocular artifacts in EEG have long been viewed as a problem for interpreting EEG data in basic and applied research, and their removal has been an ongoing effort over many decades. We recently introduced a hybrid method combining second-order blind identification (SOBI) with DANS, a novel automatic identification method, to extract components that specifically contain signals associated with horizontal and vertical saccadic eye movements (H and V Comps), and found that these components' event-related potentials in response to saccadic eye movements are systematically modulated by movement direction and distance. Here, in a case study, taking advantage of the gaze-position information contained in these ocular artifact components, we introduce the novel concept of EEG-based virtual eye tracking (EVET) and present its first prototype. Specifically, we determined (1) the amount of data needed to construct models of horizontal gaze position, and (2) the asymptotic performance levels achieved with such models. We found that for this specific calibration task, 4 blocks of data (4 saccades per target position) were needed to reach asymptotic performance, with a prediction accuracy of 0.44 and a prediction reliability of 1.67. These results demonstrate that horizontal gaze position can be tracked via EEG alone, ultimately enabling co-registration of eye movements and neural signals.
bioRxiv Subject Collection: Neuroscience