🤖 AI Summary
To address the low accuracy, high error rates, and eye fatigue of gaze-based marking menus, this paper proposes Lattice Menu, a gaze-based marking menu built on a lattice of visual anchors. The anchor lattice guides users' gaze trajectories and enables target-assisted gaze gestures, supporting rapid, accurate item selection in multilevel menus. In the evaluation, expert users achieved average selection times of 1.3–1.6 seconds with error rates around 1%, roughly five times fewer selection errors than a traditional gaze-based marking menu without visual targets. All 12 participants preferred Lattice Menu, and most reported that the visual anchors made selections more stable while reducing eye fatigue.
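The core idea, decoding a gaze trajectory by snapping noisy gaze samples to the nearest visual anchor on a lattice, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the anchor spacing, snap radius, and grid layout are assumed parameters.

```python
import math

def make_lattice(rows, cols, spacing):
    """Anchor positions on a rows x cols grid (units, e.g. degrees of visual angle)."""
    return [(c * spacing, r * spacing) for r in range(rows) for c in range(cols)]

def nearest_anchor(point, anchors, radius):
    """Snap a gaze sample to the closest anchor within `radius`, else None."""
    best, best_d = None, radius
    for a in anchors:
        d = math.dist(point, a)
        if d <= best_d:
            best, best_d = a, d
    return best

def decode_gesture(samples, anchors, radius=1.5):
    """Collapse a noisy gaze trajectory into the ordered anchors it visits."""
    visited = []
    for p in samples:
        a = nearest_anchor(p, anchors, radius)
        if a is not None and (not visited or visited[-1] != a):
            visited.append(a)
    return visited

# Hypothetical trajectory: glance near (0,0), sweep right, then up.
anchors = make_lattice(4, 4, 4.0)
samples = [(0.2, 0.1), (2.0, 0.3), (3.9, 0.2), (4.1, 3.8)]
path = decode_gesture(samples, anchors)  # [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0)]
```

The snapped anchor sequence can then be matched against the menu's gesture vocabulary; samples that fall between anchors (outside the snap radius) are simply ignored, which is one plausible way target assistance could suppress gaze jitter.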
📝 Abstract
We present Lattice Menu, a gaze-based marking menu utilizing a lattice of visual anchors that helps users perform accurate gaze pointing for menu item selection. Users who know the location of the desired item can leverage target-assisted gaze gestures for multilevel item selection by looking at visual anchors over the gaze trajectories. Our evaluation showed that Lattice Menu exhibits a considerably low error rate (~1%) and a quick menu selection time (1.3–1.6 s) for expert usage across various menu structures (4 × 4 × 4 and 6 × 6 × 6) and sizes (8, 10 and 12°). In comparison with a traditional gaze-based marking menu that does not utilize visual targets, Lattice Menu showed remarkably (~5 times) fewer menu selection errors for expert usage. In a post-interview, all 12 subjects preferred Lattice Menu, and most subjects (8 out of 12) commented that the provision of visual targets facilitated more stable menu selections with reduced eye fatigue.