🤖 AI Summary
This study addresses the cognitive mismatch between developers’ mental models and the unpredictable behavior of AI code completion tools. Through a systematic mental model elicitation process—including focus groups with 56 participants and a heuristic human factors analysis—we identify core sources of human–AI cognitive misalignment and derive 12 actionable design principles. We propose ATHENA, a dynamic adaptive architecture that tailors code suggestion strategies in real time to individual coding habits and contextual cues. A prototype evaluation demonstrates statistically significant improvements: a 38% increase in developer trust, a 27% gain in task-completion efficiency, and higher subjective satisfaction (p < 0.01). Our work establishes both a theoretical foundation and an empirically grounded design paradigm for human-centered AI programming tools.
📝 Abstract
Integrated Development Environments increasingly incorporate AI-powered code completion tools (CCTs), which promise to enhance developer efficiency, accuracy, and productivity. However, interaction challenges with CCTs persist, mainly due to mismatches between developers' mental models and the unpredictable behavior of AI-generated suggestions—an aspect that remains underexplored in the literature. To address this gap, we conducted a focus-group elicitation study with 56 developers to capture their mental models when interacting with CCTs. The study findings provide actionable insights for designing human-centered CCTs that align with user expectations, enhance satisfaction and productivity, and foster trust in AI-powered development tools. To demonstrate the feasibility of these guidelines, we also developed ATHENA, a proof-of-concept CCT that dynamically adapts to developers' coding preferences and environments, ensuring seamless integration into diverse development environments.
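The abstract does not describe ATHENA's internals, but the core idea—selecting a suggestion strategy per developer based on observed habits and the current editing context—can be illustrated with a minimal sketch. All names here (`DeveloperProfile`, `choose_strategy`, the strategy labels, and the comment-context cue) are hypothetical assumptions for illustration, not the paper's actual design:

```python
from dataclasses import dataclass, field

@dataclass
class DeveloperProfile:
    """Hypothetical per-developer record of suggestion outcomes."""
    # Running counts of accepted / rejected suggestions per strategy
    accepted: dict = field(default_factory=dict)
    rejected: dict = field(default_factory=dict)

    def record(self, strategy: str, accepted: bool) -> None:
        bucket = self.accepted if accepted else self.rejected
        bucket[strategy] = bucket.get(strategy, 0) + 1

    def acceptance_rate(self, strategy: str) -> float:
        a = self.accepted.get(strategy, 0)
        r = self.rejected.get(strategy, 0)
        # Neutral prior of 0.5 before any feedback is observed
        return a / (a + r) if (a + r) else 0.5

# Assumed strategy granularities; a real CCT would define its own
STRATEGIES = ["single-line", "multi-line", "whole-function"]

def choose_strategy(profile: DeveloperProfile, in_comment: bool) -> str:
    """Pick a suggestion strategy from habit history plus one context cue."""
    # Contextual cue: inside a comment, stay conservative
    if in_comment:
        return "single-line"
    # Otherwise favor the strategy this developer accepts most often
    return max(STRATEGIES, key=profile.acceptance_rate)
```

For example, a developer who accepts multi-line completions but dismisses whole-function ones would subsequently be served multi-line suggestions in ordinary code, while still receiving conservative single-line hints inside comments.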