🤖 AI Summary
This study investigates how AI confidence influences human self-confidence and its calibration in human-AI collaborative decision-making. Using a randomized controlled experiment with a behavioral decision-making paradigm, we demonstrate for the first time that humans actively align their self-confidence with AI confidence, a phenomenon that persists even after the AI is withdrawn and that distorts how accurately individuals assess their own judgments. Crucially, we show that real-time correctness feedback significantly attenuates this alignment effect (by ~37%, *p* < 0.01). The study's key contribution lies in uncovering the "covert guidance" role of AI confidence, namely its capacity to implicitly shape human metacognitive judgments, and in revealing the cross-contextual persistence of this influence. These findings provide empirical evidence and theoretical insight into the dynamics of human-AI trust, informing the design of more effective, better-calibrated AI-assisted decision-support systems.
📝 Abstract
Complementary collaboration between humans and AI is essential for effective human-AI decision-making. One feasible way to achieve it is to account for the calibrated confidence of both the AI and the user. However, this is likely complicated by the fact that AI confidence may influence users' self-confidence and its calibration. To explore these dynamics, we conducted a randomized behavioral experiment. Our results indicate that, in human-AI decision-making, users' self-confidence aligns with AI confidence, and this alignment can persist even after the AI is no longer involved. The alignment in turn affects the calibration of users' self-confidence. We also found that real-time correctness feedback on decisions reduced the degree of alignment. These findings suggest that users' self-confidence is not independent of AI confidence, which practitioners aiming for better human-AI collaboration need to be aware of. We call for research on the alignment of human cognition and behavior with AI.
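To make the abstract's two key quantities concrete, here is a minimal sketch of how confidence calibration and user-AI confidence alignment might be measured from per-trial data. This is not the study's actual analysis pipeline; the function names, metrics (mean-confidence-minus-accuracy gap, Pearson correlation), and sample data are illustrative assumptions only.

```python
import numpy as np

def calibration_gap(confidences, correct):
    """Mean confidence minus mean accuracy.

    Positive values indicate overconfidence, negative values
    underconfidence; values near zero indicate good calibration.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    return confidences.mean() - correct.mean()

def confidence_alignment(user_conf, ai_conf):
    """Pearson correlation between user and AI confidence across
    trials; higher values indicate stronger alignment."""
    return np.corrcoef(user_conf, ai_conf)[0, 1]

# Hypothetical per-trial data: user confidence (0-1), AI confidence
# (0-1), and whether the user's final decision was correct (1/0).
user_conf = [0.90, 0.70, 0.80, 0.60, 0.95]
ai_conf   = [0.85, 0.60, 0.90, 0.55, 0.99]
correct   = [1, 0, 1, 1, 0]

print(f"calibration gap: {calibration_gap(user_conf, correct):+.2f}")
print(f"user-AI alignment: {confidence_alignment(user_conf, ai_conf):.2f}")
```

Under this reading, the paper's central finding corresponds to the alignment measure rising when AI confidence is shown (and staying elevated after the AI is removed), while the calibration gap drifts away from zero as a consequence.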