🤖 AI Summary
In AI-assisted decision-making, users often over- or under-rely on AI recommendations because their trust is miscalibrated. To address this, we propose a real-time, trust-aware adaptive intervention framework that dynamically models user trust and triggers context-sensitive interventions, such as supporting explanations, counter-explanations, or forced pauses, depending on whether trust is classified as low or high. Integrating trust modeling, explainable AI (XAI), and human-AI interaction experiment design, the framework constitutes the first adaptive intervention engine explicitly targeting reliance calibration. Evaluated on scientific question-answering and medical diagnosis tasks, it reduces inappropriate reliance by up to 38% and improves decision accuracy by up to 20%, significantly enhancing human-AI collaborative performance. The core innovation is to treat the modeled trust state as the primary trigger for intervention, enabling precise, dynamic calibration of user reliance behavior.
📝 Abstract
Trust biases how users rely on AI recommendations in AI-assisted decision-making tasks, with low and high levels of trust resulting in increased under- and over-reliance, respectively. We propose that AI assistants should adapt their behavior through trust-adaptive interventions to mitigate such inappropriate reliance. For instance, when user trust is low, providing an explanation can elicit more careful consideration of the assistant's advice by the user. In two decision-making scenarios -- laypeople answering science questions and doctors making medical diagnoses -- we find that providing supporting and counter-explanations during moments of low and high trust, respectively, yields up to 38% reduction in inappropriate reliance and 20% improvement in decision accuracy. We are similarly able to reduce over-reliance by adaptively inserting forced pauses to promote deliberation. Our results highlight how AI adaptation to user trust facilitates appropriate reliance, presenting exciting avenues for improving human-AI collaboration.
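The trust-to-intervention mapping described above can be sketched as a simple policy: support the user when trust is low (to counter under-reliance) and challenge them when trust is high (to counter over-reliance). This is a minimal illustration only; the thresholds, the agreement-based trust proxy, and the intervention labels below are hypothetical stand-ins, not the paper's actual trust model.

```python
# Minimal sketch of a trust-adaptive intervention policy.
# All thresholds and labels are illustrative assumptions.

def estimate_trust(agreement_history):
    """Toy trust proxy: fraction of recent trials where the user
    followed the AI's advice (stand-in for a real trust model)."""
    if not agreement_history:
        return 0.5  # no evidence yet: assume neutral trust
    return sum(agreement_history) / len(agreement_history)

def choose_intervention(trust, low=0.35, high=0.75):
    """Map the current trust estimate to an intervention, mirroring
    the framework's logic described in the summary and abstract."""
    if trust < low:
        return "supporting_explanation"  # low trust: counter under-reliance
    if trust > high:
        # high trust: counter over-reliance; a forced pause is an
        # alternative intervention at this trust level
        return "counter_explanation"
    return "no_intervention"  # trust appears calibrated
```

For example, a user who followed the AI's advice on every recent trial would have a high trust estimate and receive a counter-explanation, while a user who consistently ignored the advice would receive a supporting explanation.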