GazeCopilot: Evaluating Novel Gaze-Informed Prompting for AI-Supported Code Comprehension and Readability

📅 2025-11-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current AI coding assistants (e.g., GitHub Copilot) rely on static context and fail to adapt to developers' real-time cognitive states, which leads to overly generic prompts and excessive code refactoring. This work introduces the first dynamic prompt optimization method grounded in real-time eye-tracking feedback: it integrates gaze metrics, including fixation trajectories and pupil dilation, to construct an attention-aware closed-loop interaction system that delivers personalized code suggestions and demand-driven refactoring. The core contribution is incorporating fine-grained cognitive-state modeling into prompt engineering, thereby avoiding one-size-fits-all interventions. Empirical evaluation demonstrates that the approach significantly improves code comprehension accuracy (+23.6%), reduces comprehension time (−31.4%), and raises subjective readability ratings (p < 0.01), outperforming both standard Copilot and a predefined-strategy baseline.

📝 Abstract
AI-powered coding assistants, like GitHub Copilot, are increasingly used to boost developers' productivity. However, their output quality hinges on the contextual richness of the prompts. Meanwhile, gaze behaviour carries rich cognitive information, providing insights into how developers process code. We leverage this in Real-time GazeCopilot, a novel approach that refines prompts using real-time gaze data to improve code comprehension and readability by integrating gaze metrics, like fixation patterns and pupil dilation, into prompts to adapt suggestions to developers' cognitive states. In a controlled lab study with 25 developers, we evaluated Real-time GazeCopilot against two baselines: Standard Copilot, which relies on text prompts provided by developers, and Pre-set GazeCopilot, which uses a hard-coded prompt that assumes developers' gaze metrics indicate they are struggling with all aspects of the code, allowing us to assess the impact of leveraging the developer's personal real-time gaze data. Our results show that prompts dynamically generated from developers' real-time gaze data significantly improve code comprehension accuracy, reduce comprehension time, and improve perceived readability compared to Standard Copilot. Our Real-time GazeCopilot approach selectively refactors only the code aspects where gaze data indicate difficulty, outperforming the overgeneralized refactoring done by Pre-set GazeCopilot by avoiding revisions to code the developer already understands.
Problem

Research questions and friction points this paper is trying to address.

Enhancing code comprehension accuracy through real-time gaze data integration
Reducing code comprehension time with dynamic gaze-informed prompt refinement
Improving perceived readability by selectively refactoring problematic code segments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates real-time gaze data into prompts
Uses fixation patterns and pupil dilation metrics
Dynamically refactors code based on cognitive states
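The innovations above can be illustrated with a minimal sketch of gaze-informed prompt construction. The paper does not publish its implementation; all names, metrics, and thresholds below (`GazeMetrics`, `build_gaze_prompt`, the 400 ms fixation and 0.3 dilation cutoffs) are hypothetical, chosen only to show how per-region gaze signals could flag difficult code and drive selective refactoring requests.

```python
from dataclasses import dataclass

@dataclass
class GazeMetrics:
    """Per-region gaze statistics (illustrative fields, not the paper's schema)."""
    region: str              # label of a code region, e.g. a function name
    fixation_count: int      # number of fixations on the region
    mean_fixation_ms: float  # average fixation duration in milliseconds
    pupil_dilation: float    # normalized dilation above baseline (0 = baseline)

def build_gaze_prompt(code: str, metrics: list[GazeMetrics],
                      fixation_ms_threshold: float = 400.0,
                      dilation_threshold: float = 0.3) -> str:
    """Fold gaze-derived difficulty signals into an assistant prompt.

    Regions with long fixations or elevated pupil dilation are treated as
    hard to read; only those are requested for refactoring, so code the
    developer already understands is left untouched.
    """
    hard = [m.region for m in metrics
            if m.mean_fixation_ms > fixation_ms_threshold
            or m.pupil_dilation > dilation_threshold]
    if not hard:
        # No difficulty detected: ask for a brief explanation only.
        return f"Explain the following code briefly:\n{code}"
    regions = ", ".join(hard)
    return (
        "Eye-tracking suggests the developer is struggling with these "
        f"regions: {regions}. Refactor ONLY those regions for readability "
        "and explain them; leave the rest of the code unchanged.\n" + code
    )

# Example: one struggling region, one well-understood region.
metrics = [
    GazeMetrics("parse_input", 12, 620.0, 0.45),  # long fixations -> flagged
    GazeMetrics("render", 3, 180.0, 0.05),        # quick scan -> left alone
]
prompt = build_gaze_prompt("def parse_input(...): ...", metrics)
```

The design choice this sketch highlights is the one the paper credits for beating Pre-set GazeCopilot: the prompt names only the regions whose gaze metrics cross a difficulty threshold, instead of assuming the developer struggles with everything.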
Yasmine Elfares
University of Glasgow, United Kingdom
Gül Çalikli
University of Glasgow, United Kingdom
Mohamed Khamis
Professor of Cybersecurity and HCI, University of Glasgow
Human-Computer Interaction · Usable Security and Privacy · Eye Tracking · Virtual Reality · XR