🤖 AI Summary
This study addresses the limitations of existing evaluation methods in capturing the highly individualized and dynamically evolving subjective experiences in human–AI interaction. It proposes an "AI phenomenology" research stance that integrates Husserlian phenomenology, postphenomenology, and Actor-Network Theory through a longitudinal, multimethod empirical design centered on first-person user experiences. The project introduces a replicable research toolkit comprising lived-experience capture instruments, three core design concepts (translucent design, agency-aware value alignment, and temporal co-evolution tracking), and a concrete research agenda. This framework offers both theoretical grounding and practical scaffolding for understanding and supporting the dynamic co-evolution between humans and AI systems in personal and professional contexts.
📝 Abstract
There is no 'ordinary' when it comes to AI. The human-AI experience is extraordinarily complex and specific to each person, yet dominant measures such as usability scales and engagement metrics flatten away this nuance. We argue for AI phenomenology: a research stance that asks "How did it feel?" alongside the standard question of "How well did it perform?" when interacting with AI systems. AI phenomenology serves bidirectional human-AI alignment by foregrounding users' first-person perceptions and interpretations of AI systems over time. We motivate AI phenomenology as a framework that captures how alignment is experienced, negotiated, and updated between users and AI systems. Tracing a lineage from Husserl through postphenomenology to Actor-Network Theory, and grounding our argument in three studies (two longitudinal studies with "Day", an AI companion, and a multi-method study of agentic AI in software engineering), we contribute a replicable methodological toolkit for conducting AI phenomenology research: instruments for capturing lived experience across personal and professional contexts, three design concepts (translucent design, agency-aware value alignment, temporal co-evolution tracking), and a concrete research agenda. We offer this toolkit not as a new paradigm but as a practical scaffold that researchers can adapt as AI systems, and the humans who live alongside them, continue to co-evolve.