Overreliance on AI in Information-seeking from Video Content

📅 2026-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how inaccurate or misleading responses from generative AI affect user accuracy, efficiency, and confidence in video-based information retrieval, revealing safety risks that arise from overreliance on AI systems. Through a large-scale controlled experiment involving approximately 900 participants and over 8,000 tasks, the authors compare user performance across three conditions: watching videos alone, using a truthful AI assistant, and interacting with a deceptive AI assistant that deliberately provides incorrect answers. The truthful AI improves accuracy by 3–7% for users who viewed the relevant video segment and by 27–35% for those who did not, while task efficiency rises by 10% for short videos and 25% for longer ones. In contrast, exposure to the deceptive AI reduces user accuracy by up to 32%, yet users' self-reported confidence remains largely unchanged, pointing to overreliance as a fundamental safety risk in AI-mediated video information retrieval.

📝 Abstract
The ubiquity of multimedia content is reshaping online information spaces, particularly in social media environments. At the same time, search is being rapidly transformed by generative AI, with large language models (LLMs) routinely deployed as intermediaries between users and multimedia content to retrieve and summarize information. Despite their growing influence, the impact of LLM inaccuracies and potential vulnerabilities on multimedia information-seeking tasks remains largely unexplored. We investigate how generative AI affects accuracy, efficiency, and confidence in information retrieval from videos. We conduct an experiment with around 900 participants on 8,000+ video-based information-seeking tasks, comparing behavior across three conditions: (1) access to videos only, (2) access to videos with LLM-based AI assistance, and (3) access to videos with a deceiving AI assistant designed to provide false answers. We find that AI assistance increases accuracy by 3-7% when participants viewed the relevant video segment, and by 27-35% when they did not. Efficiency increases by 10% for short videos and 25% for longer ones. However, participants tend to over-rely on AI outputs, resulting in accuracy drops of up to 32% when interacting with the deceiving AI. Alarmingly, self-reported confidence in answers remains stable across all three conditions. Our findings expose fundamental safety risks in AI-mediated video information retrieval.
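The condition-level comparison described in the abstract amounts to grouping task outcomes by assistance condition and by whether the participant viewed the relevant segment. Below is a minimal, hypothetical sketch of that aggregation; the record layout, condition labels, and sample values are illustrative assumptions, not the authors' data schema or results.

```python
# Hypothetical sketch: aggregating task-level logs into the accuracy and
# efficiency comparisons described in the abstract. Records and column
# meanings are illustrative assumptions, not the authors' actual data.
from collections import defaultdict

# Each record: (condition, viewed_relevant_segment, answered_correctly, time_seconds)
# Assumed conditions: "video_only", "truthful_ai", "deceiving_ai"
task_logs = [
    ("video_only",   True,  True,  95.0),
    ("video_only",   False, False, 140.0),
    ("truthful_ai",  True,  True,  70.0),
    ("truthful_ai",  False, True,  80.0),
    ("deceiving_ai", True,  False, 75.0),
    ("deceiving_ai", False, False, 85.0),
]

def accuracy_by(key):
    """Mean accuracy for each group produced by key(record)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for rec in task_logs:
        group = key(rec)
        hits[group] += rec[2]
        totals[group] += 1
    return {group: hits[group] / totals[group] for group in totals}

# Accuracy per (condition, viewed_segment) cell, mirroring the 3-7% vs 27-35% split.
accuracy = accuracy_by(lambda r: (r[0], r[1]))

# Mean completion time per condition as a simple efficiency proxy.
times = defaultdict(list)
for condition, _, _, seconds in task_logs:
    times[condition].append(seconds)
efficiency = {c: sum(v) / len(v) for c, v in times.items()}

print(accuracy)
print(efficiency)
```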
Problem

Research questions and friction points this paper is trying to address.

overreliance on AI
information-seeking
video content
generative AI
AI safety
Innovation

Methods, ideas, or system contributions that make the work stand out.

generative AI
overreliance
video information retrieval
LLM vulnerabilities
human-AI interaction