Lattice Menu: A Low-Error Gaze-Based Marking Menu Utilizing Target-Assisted Gaze Gestures on a Lattice of Visual Anchors

📅 2025-11-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low accuracy, high error rates, and ocular fatigue of gaze-based marking menus, this paper proposes Lattice Menu. Lattice Menu places a structured lattice of visual anchors on the interface to guide users' gaze trajectories and enable target-assisted gaze gesture recognition, supporting rapid, accurate selection in multilevel menus. Compared with a conventional gaze-based marking menu without visual targets, Lattice Menu reduces both cognitive and motor load: in the evaluation, expert users achieved average selection times of 1.3–1.6 seconds with error rates around 1%, roughly five times fewer selection errors than the baseline, and all 12 participants preferred Lattice Menu. The work demonstrates that introducing structured visual anchors into the gaze interaction loop makes gaze-based menus substantially more usable.

📝 Abstract
We present Lattice Menu, a gaze-based marking menu utilizing a lattice of visual anchors that helps perform accurate gaze pointing for menu item selection. Users who know the location of the desired item can leverage target-assisted gaze gestures for multilevel item selection by looking at visual anchors over the gaze trajectories. Our evaluation showed that Lattice Menu exhibits a considerably low error rate (~1%) and a quick menu selection time (1.3–1.6 s) for expert usage across various menu structures (4 × 4 × 4 and 6 × 6 × 6) and sizes (8, 10 and 12°). In comparison with a traditional gaze-based marking menu that does not utilize visual targets, Lattice Menu showed remarkably (~5 times) fewer menu selection errors for expert usage. In a post-interview, all 12 subjects preferred Lattice Menu, and most subjects (8 out of 12) commented that the provisioning of visual targets facilitated more stable menu selections with reduced eye fatigue.
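The core idea of target-assisted gaze gestures, snapping noisy gaze samples to the nearest visual anchor and keeping only sustained fixations, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the anchor coordinates, the dwell threshold, and the decoding rule are all assumptions.

```python
import math

def nearest_anchor(gaze_xy, anchors):
    """Snap a raw gaze sample to the closest visual anchor on the lattice."""
    return min(anchors, key=lambda a: math.dist(gaze_xy, a))

def decode_path(samples, anchors, min_run=3):
    """Collapse a stream of gaze samples into the sequence of anchors the
    user fixated, ignoring brief transit samples shorter than min_run."""
    path, current, run = [], None, 0
    for s in samples:
        a = nearest_anchor(s, anchors)
        if a == current:
            run += 1
        else:
            current, run = a, 1
        if run == min_run and (not path or path[-1] != a):
            path.append(a)
    return path
```

For example, a trajectory that dwells near one anchor, sweeps past a second, and settles on a third decodes to just the two fixated anchors, which is what lets short transit glances be ignored during a gesture.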
Problem

Research questions and friction points this paper is trying to address.

Reduces gaze pointing errors in menu selection
Enables accurate multilevel item selection via gaze gestures
Improves selection stability while reducing eye fatigue
Innovation

Methods, ideas, or system contributions that make the work stand out.

Utilizes lattice of visual anchors for gaze pointing
Employs target-assisted gaze gestures for item selection
Achieves low error rate with quick selection times
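A marking-menu structure such as 4 × 4 × 4 means each selection is a sequence of three directional gaze strokes, one per menu level. A hedged sketch of how such a stroke sequence could index into a nested menu (the four direction names and the menu contents are invented for illustration, not taken from the paper):

```python
def stroke_direction(start, end):
    """Classify a gaze stroke into one of four cardinal directions."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "up" if dy > 0 else "down"

def select(menu, strokes):
    """Walk a nested menu dict, one level per (start, end) gaze stroke."""
    node = menu
    for start, end in strokes:
        node = node[stroke_direction(start, end)]
    return node
```

With three strokes and four directions per level, this addresses 4³ = 64 leaf items, matching the 4 × 4 × 4 structure evaluated in the paper.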
Authors

Taejun Kim (HCI Lab, KAIST)
Auejin Ham (Master's student, KAIST; Human-Computer Interaction)
Sunggeun Ahn (HCI Lab, KAIST)
Geehyuk Lee (KAIST; HCI)