🤖 AI Summary
Despite their cost-efficiency and reduced bias, robo-advisors (RAs) see low user adoption, and little is known about how users perceive RA roles or integrate algorithmic advice into their decisions.
Method: This study employs a multi-stage mixed-methods approach combining a behavioral experiment (N = 334), thematic analysis, and follow-up quantitative validation to examine how performance information and the gain/loss framing of recommendations shape user reliance on RAs.
Contribution/Results: We propose a novel tripartite RA role taxonomy (tool, collaborator, and authority), identify four distinct user archetypes, and develop a 2 × 2 "individual–algorithm" typology of acceptance antecedents. Results show that while users generally rely on RAs, reliance is significantly moderated by the salience of performance information and by the gain/loss framing of recommendations. Key antecedents of adoption, both individual (e.g., cognitive style, trust disposition) and algorithmic (e.g., transparency, explainability of performance), are empirically identified, offering theoretical grounding and actionable design principles for human–algorithm collaborative decision-making.
📝 Abstract
Robo-advisors (RAs) are cost-effective, bias-resistant alternatives to human financial advisors, yet adoption remains limited. While prior research has examined user interactions with RAs, less is known about how individuals interpret RA roles and integrate their advice into decision-making. To address this gap, this study employs a multiphase mixed-methods design integrating a behavioral experiment (N = 334), thematic analysis, and follow-up quantitative testing. Findings suggest that people tend to rely on RAs, with reliance shaped by information about RA performance and the framing of advice as gains or losses. Thematic analysis reveals three RA roles in decision-making and four user types, each reflecting distinct patterns of advice integration. In addition, a 2 × 2 typology categorizes antecedents of acceptance into enablers and inhibitors at both the individual and algorithmic levels. By combining behavioral, interpretive, and confirmatory evidence, this study advances understanding of human–RA collaboration and provides actionable insights for designing more trustworthy and adaptive RA systems.