A Resource-Rational Principle for Modeling Visual Attention Control

📅 2026-03-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing models of visual attention are often task-specific, lack a unifying theoretical foundation, and suffer from limited interpretability. This work proposes a unified computational framework grounded in resource-rationality, formalizing visual attention as a sequential decision-making process under constraints of perception, memory, and time. By casting reading and multitasking scenarios as bounded optimal control problems, the model leverages a core Partially Observable Markov Decision Process (POMDP) architecture. In simulated environments, realistic eye-movement patterns emerge naturally, without hand-crafted rules or purely data-driven learning. The framework reproduces established empirical phenomena, reveals an inherent trade-off between comprehension and safety, and generates novel, testable predictions under time pressure and dynamic interface changes.

📝 Abstract
Understanding how people allocate visual attention is central to Human-Computer Interaction (HCI), yet existing computational models of attention are often descriptive, task-specific, or difficult to interpret. My dissertation develops a resource-rational, simulation-based framework for modeling visual attention as a sequential decision-making process under perceptual, memory, and time constraints. I formalize visual tasks, such as reading and multitasking, as bounded-optimal control problems using Partially Observable Markov Decision Processes, enabling eye-movement behaviors such as fixation and attention switching to emerge from rational adaptation rather than being hand-coded or purely data-driven. These models are instantiated in simulation environments spanning traditional text reading and reading-while-walking with smart glasses, where they reproduce classic empirical effects, explain observed trade-offs between comprehension and safety, and generate novel predictions under time pressure and interface variation. Collectively, this work contributes a unified computational account of visual attention, offering new tools for theory-driven and resource-efficient HCI design.
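To make the POMDP framing concrete, here is a minimal, hypothetical sketch (not the dissertation's actual model) of reading as belief-based sequential decision making: each fixation yields a noisy observation of the current word, the reader's belief that the word has been identified is updated by Bayes' rule, and the reader moves on once that belief clears a threshold. The accuracy parameter `acc` stands in for perceptual noise, and the threshold encodes the comprehension-versus-speed trade-off; both names and values are illustrative assumptions.

```python
def update_belief(belief: float, consistent: bool, acc: float = 0.8) -> float:
    """One Bayesian update of P(word identified) after a noisy fixation.

    `acc` is the (assumed) probability that a fixation yields an
    observation consistent with the true word; it stands in for the
    perceptual constraints in the POMDP formulation.
    """
    p_obs_if_id = acc if consistent else 1.0 - acc
    p_obs_if_not = (1.0 - acc) if consistent else acc
    num = p_obs_if_id * belief
    return num / (num + p_obs_if_not * (1.0 - belief))


def fixations_to_threshold(prior: float = 0.5, acc: float = 0.8,
                           threshold: float = 0.9) -> int:
    """Fixations a rational reader spends on a word before moving on.

    The threshold encodes the comprehension-vs-time trade-off: raising
    it buys comprehension (higher identification confidence) at the
    cost of more fixations, i.e. more reading time.
    """
    belief, n = prior, 0
    while belief < threshold:
        # Assume observations consistent with the true word for this sketch.
        belief = update_belief(belief, consistent=True, acc=acc)
        n += 1
    return n
```

Under these toy parameters, noisier perception or a stricter comprehension threshold both demand more fixations per word, which is the qualitative shape of the rational-adaptation story: fixation durations fall out of the cost structure rather than being hand-coded.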
Problem

Research questions and friction points this paper is trying to address.

visual attention
computational modeling
human-computer interaction
attention allocation
cognitive constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

resource-rational modeling
visual attention
sequential decision-making
POMDP
simulation-based framework