Human Trust in AI Search: A Large-Scale Experiment

📅 2025-04-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how the design of generative AI (GenAI) search interfaces affects user trust, and under what conditions. The authors first execute ~12,000 real-world search queries across seven countries, collecting ~80,000 live GenAI and traditional search results, to gauge global exposure to GenAI search. They then run a preregistered randomized controlled trial on a large sample representative of the U.S. population, causally identifying differential effects of interface elements on trust: reference links and citations significantly increase trust, even when they are incorrect or hallucinated, whereas highlighting the model's confidence reduces trust whether that confidence is high or low; positive social feedback increases trust while negative feedback reduces it. Trust varies with topic domain, demographics, education, and prior AI experience, and it predicts behavior: participants who trust GenAI more click more and spend less time evaluating results. Overall, trust in GenAI search remains lower than in traditional search. The work provides the first large-scale causal evidence for designing trustworthy GenAI search interfaces.

📝 Abstract
Large Language Models (LLMs) increasingly power generative search engines which, in turn, drive human information seeking and decision making at scale. The extent to which humans trust generative artificial intelligence (GenAI) can therefore influence what we buy, how we vote and our health. Unfortunately, no work establishes the causal effect of generative search designs on human trust. Here we execute ~12,000 search queries across seven countries, generating ~80,000 real-time GenAI and traditional search results, to understand the extent of current global exposure to GenAI search. We then use a preregistered, randomized experiment on a large study sample representative of the U.S. population to show that while participants trust GenAI search less than traditional search on average, reference links and citations significantly increase trust in GenAI, even when those links and citations are incorrect or hallucinated. Uncertainty highlighting, which reveals GenAI's confidence in its own conclusions, makes us less willing to trust and share generative information whether that confidence is high or low. Positive social feedback increases trust in GenAI while negative feedback reduces trust. These results imply that GenAI designs can increase trust in inaccurate and hallucinated information and reduce trust when GenAI's certainty is made explicit. Trust in GenAI varies by topic and with users' demographics, education, industry employment and GenAI experience, revealing which sub-populations are most vulnerable to GenAI misrepresentations. Trust, in turn, predicts behavior, as those who trust GenAI more click more and spend less time evaluating GenAI search results. These findings suggest directions for GenAI design to safely and productively address the AI "trust gap."
Problem

Research questions and friction points this paper is trying to address.

Investigates how generative AI search designs affect human trust
Examines impact of citations and uncertainty on GenAI trust levels
Identifies demographic factors influencing vulnerability to AI misinformation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reference links and citations boost trust in GenAI, even when hallucinated
Uncertainty highlighting reduces trust, whether stated confidence is high or low
Positive social feedback increases trust; negative feedback reduces it