🤖 AI Summary
Generative AI often fails in workplace settings due to its inability to effectively model the multifaceted and dynamic contexts in which users operate. This study addresses this limitation through semi-structured expert interviews and interdisciplinary theoretical analysis, revealing significant discrepancies in how developers, end users, and social scientists conceptualize “context.” The work introduces the notion of “context collapse”—a phenomenon wherein rich, multidimensional contextual information is oversimplified or conflated during computational modeling. To mitigate this issue, the research advocates shifting from static data collection toward interactive, situated practices that embed AI development within authentic work environments. By foregrounding context as a lived, evolving construct rather than a fixed input, the study provides both theoretical grounding and practical pathways for designing generative AI systems that better align with real-world user needs and organizational complexities.
📝 Abstract
As generative AI technologies are pressed into service in workplace settings, current approaches to account for the contexts in which such technologies are used fall short of users' expectations and needs. This paper empirically demonstrates, through expert interviews, both how these tools fail to account for users' context and how users deploy concrete strategies to address such failures. The paper analyzes how context is variously conceptualized by tool developers, users, and social scientists to identify specific pitfalls inherent in computational approaches to context. Multiple distinct contexts tend to collapse into one another, or to rot, degrading over time, reducing the utility of any efforts to account for context. The paper concludes with a provocation to shift from indiscriminate collection of context-relevant data toward a more interactional set of practices that embed GenAI systems more appropriately into users' contexts of use.