An eye gaze model for seismic interpretation support
Abstract
Designing systems that offer support to experts at the right time during cognitively intensive tasks remains a challenging endeavor, despite years of research progress in the area. This paper proposes a gaze model, based on empirical eye-tracking data, to identify when a system should proactively interact with an expert during visual inspection tasks. The gaze model derives from the analysis of a user study in which 11 seismic interpreters were asked to visually inspect seismic images from known and unknown basins. Eye-tracking fixation patterns were triangulated with pupil dilations and think-aloud data. Results show that cumulative saccadic distances make it possible to identify when additional information could be offered to support seismic interpreters, shifting their visual search behavior from exploratory to goal-directed.
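As an illustration of the kind of signal the abstract refers to, the sketch below computes the cumulative saccadic distance over a sliding window of fixations and flags candidate moments for proactive support. The window length, pixel threshold, and function names are illustrative assumptions, not values or code from the study.

```python
import math
from typing import List, Tuple

# Fixations as (x, y) screen coordinates in pixels. The window length and
# threshold below are illustrative placeholders, not values from the study.
WINDOW = 10          # number of consecutive saccades per window (assumed)
THRESHOLD_PX = 4000  # cumulative saccadic distance that triggers support (assumed)

def saccadic_distances(fixations: List[Tuple[float, float]]) -> List[float]:
    """Euclidean distance between each pair of consecutive fixations."""
    return [
        math.dist(fixations[i], fixations[i + 1])
        for i in range(len(fixations) - 1)
    ]

def support_moments(fixations: List[Tuple[float, float]]) -> List[int]:
    """Indices in the fixation sequence where the cumulative saccadic
    distance over the last WINDOW saccades exceeds THRESHOLD_PX, i.e.
    candidate moments for proactively offering additional information."""
    dists = saccadic_distances(fixations)
    moments = []
    for i in range(WINDOW, len(dists) + 1):
        if sum(dists[i - WINDOW:i]) > THRESHOLD_PX:
            moments.append(i)
    return moments

if __name__ == "__main__":
    # Toy trace: a few tight fixations followed by wide, exploratory jumps.
    trace = [(100, 100), (110, 105), (120, 110)] + \
            [(900 * (i % 2), 500 * (i % 3)) for i in range(15)]
    print(support_moments(trace))
```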