🤖 AI Summary
This study investigates whether human involvement in AI-driven decision-making improves downstream consumer welfare. Method: Leveraging a large-scale randomized controlled field experiment at a major European savings bank, we compare fully automated AI-generated investment advice with human-AI collaborative advice, measuring customer adoption behavior and material welfare outcomes. Contribution/Results: Human participation does not improve the objective quality of recommendations; instead, it functions as a peripheral cue that significantly increases the advice's affective appeal, especially in high-risk decisions. As a result, adoption rates rise substantially, yielding measurable welfare gains. This is the first study to provide causal evidence that the value of human-AI collaboration stems not from quality gatekeeping but from its influence on consumer perceptions. Based on these findings, we propose a consumer-welfare-centered paradigm for human-AI co-design, offering both empirical grounding and theoretical advancement for AI governance and service design.
📝 Abstract
Amid ongoing policy and managerial debates on keeping humans in the loop of AI decision-making, we investigate whether human involvement in AI-based service production benefits downstream consumers. Partnering with a large savings bank in Europe, we produced pure AI and human-AI collaborative investment advice, passed it to customers, and examined their advice-taking in a field experiment. On the production side, contrary to concerns that humans might inefficiently override AI output, we find that giving a human banker the final say over AI-generated financial advice does not compromise its quality. More importantly, on the consumption side, customers are more likely to follow investment advice from the human-AI collaboration compared to pure AI, especially when facing riskier decisions. In our setting, this increased reliance leads to higher material welfare for consumers. Additional analyses from the field experiment and an online experiment show that the persuasive power of human-AI advice cannot be explained by consumers' beliefs about enhanced advice quality due to human-AI complementarities. Instead, the benefit stems from human involvement acting as a peripheral cue that increases the advice's affective appeal. Our findings suggest that regulations and guidelines should adopt a consumer-centered approach by fostering service environments in which humans and AI systems can collaborate to improve consumer outcomes. These insights are relevant for managers designing AI-based services and for policymakers advocating for human oversight in AI systems.