The Agony of Opacity: Foundations for Reflective Interpretability in AI-Mediated Mental Health Support

📅 2025-12-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
In AI-delivered psychological support, users in severe distress are prone to misinterpret model outputs or place unwarranted trust in them because of the "black-box" nature of AI systems, posing tangible clinical and ethical risks. Method: We introduce the framework of *reflective interpretability*, which integrates the cognitive characteristics of distressed states from clinical psychology, principles from healthcare ethics (e.g., autonomy and informed consent), and AI interpretability theory, yielding a layered interpretability paradigm oriented toward high-distress and crisis contexts. Our approach adapts clinical decision-making logic from four mental health fields (psychotherapy, community-based crisis intervention, psychiatry, and care authorization) and combines interface-level design with decision-provenance techniques. Contribution: We establish why interpretability is distinctly necessary under severe distress, propose four transferable, clinically grounded explanation strategies, and surface tensions among interpretability, therapeutic efficacy, privacy preservation, and accountability.

📝 Abstract
Throughout history, a prevailing paradigm in mental healthcare has been one in which distressed people may receive treatment with little understanding of how their experience is perceived by their care provider, and in turn, of the decisions made by their provider about how treatment will progress. Paralleling this offline model of care, people who seek mental health support from AI chatbots are similarly provided little context for how their expressions of distress are processed by the model and, subsequently, for the logic that may underlie model responses. People in severe distress who turn to AI chatbots for support thus find themselves caught between black boxes, with unique forms of agony that arise from these intersecting opacities, including misinterpreting model outputs or attributing greater capabilities to a model than are yet possible, which has led to documented real-world harms. Building on empirical research from clinical psychology and AI safety, alongside rights-oriented frameworks from medical ethics, we describe how the distinct psychological state induced by severe distress can influence chatbot interaction patterns, and argue that this state of mind (combined with differences in how a user might perceive a chatbot compared to a care provider) uniquely necessitates a higher standard of interpretability than is needed in general AI chatbot use. Drawing inspiration from newer interpretable treatment paradigms, we then describe specific technical and interface design approaches that could be used to adapt interpretability strategies from four specific mental health fields (psychotherapy, community-based crisis intervention, psychiatry, and care authorization) to AI models, including consideration of the role of interpretability in the treatment process and the tensions that may arise with greater interpretability.
Problem

Research questions and friction points this paper is trying to address.

Addresses opacity in AI mental health chatbots
Explores interpretability needs for distressed users
Proposes design adaptations from clinical fields
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapts interpretability strategies from mental health fields
Integrates clinical psychology with AI safety principles
Proposes interface designs for transparent AI interactions
🔎 Similar Papers
No similar papers found.
Sachin R. Pendse
University of California, San Francisco (UCSF), San Francisco, United States
Darren Gergle
Professor, Northwestern University
human-computer interaction, computer-mediated communication, social computing, HCI, CSCW
Rachel Kornfield
Northwestern University
Health Communication, Computer-mediated Communication, Human-Computer Interaction, Digital Mental Health
Kaylee Kruzan
Feinberg School of Medicine, Northwestern University, Chicago, United States
David Mohr
Feinberg School of Medicine, Northwestern University, Chicago, United States
Jessica Schleider
Feinberg School of Medicine, Northwestern University, Chicago, United States
Jina Suh
Microsoft Research, University of Washington
machine learning, human-computer interaction, mental health
Annie Wescott
Feinberg School of Medicine, Northwestern University, Chicago, United States
Jonah Meyerhoff
Feinberg School of Medicine, Northwestern University, Chicago, United States