Does My Chatbot Have an Agenda? Understanding Human and AI Agency in Human-Human-like Chatbot Interaction

📅 2026-01-30
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study investigates how agency is dynamically distributed and co-constructed between humans and AI chatbots during conversation. Drawing on a one-month longitudinal user study with the in-house large language model companion “Day,” the research integrates dialogue logs, semi-structured interviews, and strategy elicitation techniques to advance a “human–AI agency co-construction” perspective and develop a 3×5 analytical framework mapping agency across dimensions of actor (human vs. AI) and action type. Findings reveal that agency emerges turn-by-turn as a negotiated outcome: users assert control through boundary setting and feedback, while the AI is perceived as an intentional guide. Building on these insights, the study advocates for “on-demand transparency”—a design principle that affords negotiable agency by selectively revealing system intent and capabilities according to user needs.

📝 Abstract
AI chatbots are shifting from tools to companions. This raises critical questions about agency: who drives conversations and sets boundaries in human-AI chatrooms? We report a month-long longitudinal study with 22 adults who chatted with Day, an LLM companion we built, followed by a semi-structured interview with post-hoc elicitation of notable moments, cross-participant chat reviews, and a 'strategy reveal' disclosing Day's vertical (depth-seeking) vs. horizontal (breadth-seeking) modes. We discover that agency in human-AI chatrooms is an emergent, shared experience: participants claimed agency by setting boundaries and providing feedback, the AI was perceived to steer intentions and drive execution, and control shifted and was co-constructed turn-by-turn. We introduce a 3-by-5 framework mapping who (human, AI, hybrid) × agency action (Intention, Execution, Adaptation, Delimitation, Negotiation), modulated by individual and environmental factors. Ultimately, we argue for translucent design (i.e., transparency-on-demand), spaces for agency negotiation, and guidelines toward agency-aware conversational AI.
Problem

Research questions and friction points this paper is trying to address.

agency
human-AI interaction
chatbot
conversation control
intention setting
Innovation

Methods, ideas, or system contributions that make the work stand out.

human-AI agency
translucent design
conversational AI
longitudinal study
agency negotiation
Bhada Yun
University of California, Berkeley
Human-Centered AI · Human-Computer Interaction · Social Computing
Evgenia Taranova
University of Bergen
April Yi Wang
ETH Zürich