🤖 AI Summary
When AI systems lack intrinsic interpretability, users struggle to calibrate their trust appropriately and to make effective use of AI-generated decision recommendations, diminishing the efficacy of human-AI collaboration.
Method: This paper introduces the first dynamic adaptation framework that leverages large language models (LLMs) to generate natural-language analyses and autonomously schedules the most effective explanatory content based on real-time human-factors feedback, collected via randomized controlled experiments and computational modeling.
Contribution/Results: The framework achieves personalized explanation delivery without requiring access to or reliance on the underlying AI system’s internal interpretability mechanisms. Empirical evaluation demonstrates statistically significant improvements in both appropriate trust calibration and decision accuracy among users. Notably, it enhances AI-assisted decision-making performance even in settings where the base AI provides no native explanations.
📝 Abstract
AI-assisted decision making is becoming increasingly prevalent, yet individuals often fail to utilize AI-based decision aids appropriately, especially when AI explanations are absent, potentially because they do not reflect critically on the AI's decision recommendations. Large language models (LLMs), with their exceptional conversational and analytical capabilities, present great opportunities to enhance AI-assisted decision making in the absence of AI explanations by providing natural-language analyses of the AI's decision recommendation, e.g., how each feature of a decision-making task might contribute to the AI recommendation. In this paper, via a randomized experiment, we first show that presenting LLM-powered analyses of each task feature, either sequentially or concurrently, does not significantly improve people's AI-assisted decision performance. To enable decision makers to better leverage LLM-powered analysis, we then propose an algorithmic framework to characterize the effects of LLM-powered analysis on human decisions and dynamically decide which analysis to present. Our evaluation with human subjects shows that this approach effectively improves decision makers' appropriate reliance on AI in AI-assisted decision making.
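The abstract does not specify how the framework "dynamically decides which analysis to present." One plausible way to implement such a scheduler is as a multi-armed bandit over candidate feature analyses, updated from feedback on whether the human's final decision was correct. The sketch below is an assumption, not the paper's actual algorithm: the class name `AnalysisScheduler`, the Beta priors, and Thompson sampling are all illustrative choices.

```python
import random


class AnalysisScheduler:
    """Hypothetical sketch of a dynamic analysis scheduler.

    Each candidate LLM-generated feature analysis is treated as a bandit
    arm. A Beta posterior tracks the estimated probability that presenting
    that analysis leads to a correct human decision; Thompson sampling
    picks which analysis to show next. All names and priors here are
    assumptions for illustration, not the paper's published method.
    """

    def __init__(self, analysis_ids):
        # Beta(1, 1) (uniform) prior for each analysis:
        # [successes + 1, failures + 1]
        self.posteriors = {a: [1, 1] for a in analysis_ids}

    def select(self):
        # Sample one value per analysis from its Beta posterior and
        # present the analysis with the highest sampled value.
        samples = {
            a: random.betavariate(s, f)
            for a, (s, f) in self.posteriors.items()
        }
        return max(samples, key=samples.get)

    def update(self, analysis_id, decision_correct):
        # Feedback signal: did the decision maker's final choice turn
        # out to be correct after seeing this analysis?
        s, f = self.posteriors[analysis_id]
        if decision_correct:
            self.posteriors[analysis_id] = [s + 1, f]
        else:
            self.posteriors[analysis_id] = [s, f + 1]
```

In use, the scheduler would call `select()` before each decision task to choose which feature analysis to display, then `update()` once the ground-truth outcome is known, so that analyses associated with better human decisions are shown more often over time.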