🤖 AI Summary
This study addresses the safety and accountability risks arising from the opacity of algorithmic decision-making in semi-autonomous driving systems, which hinders drivers’ ability to understand, trust, and effectively collaborate with AI in complex traffic scenarios. Drawing on semi-structured interviews with 16 U.S. drivers and integrating critical data studies with algorithmic accountability theory, the research examines how drivers employ folk theories, such as anthropomorphizing the AI, to interpret system behavior and respond to anomalies. The work proposes a “participatory algorithmic governance” framework that reconceptualizes drivers not as passive data sources but as epistemic agents. It exposes critical gaps in current systems’ transparency and feedback mechanisms and offers design pathways to improve accountability and the user experience of human-AI collaboration.
📝 Abstract
As semi-autonomous vehicles (AVs) become prevalent, drivers must collaborate with AI systems whose decision-making processes remain opaque. This study examines how drivers of AVs develop folk theories to interpret algorithmic behavior that contradicts their expectations. Through 16 semi-structured interviews with drivers in the United States, we investigate the explanatory frameworks drivers construct to make sense of AI decisions, the strategies they employ when systems behave unexpectedly, and their experiences with control handoffs and feedback mechanisms. Our findings reveal that drivers develop sophisticated folk theories, often using anthropomorphic metaphors describing systems that “see,” “hesitate,” or become “overwhelmed,” yet they lack the informational resources to validate these theories or meaningfully participate in algorithmic governance. We identify contexts where algorithmic opacity manifests acutely, including complex intersections, adverse weather, and rural environments. Current AV designs position drivers as passive data sources rather than epistemic agents, creating accountability gaps that undermine trust and safety. Drawing on critical data studies and the algorithmic accountability literature, we propose a framework for participatory algorithmic governance that would give drivers transparency into AI decision-making and meaningful channels for contributing to system improvement. This research contributes to understanding how users navigate datafied sociotechnical systems in safety-critical contexts.