Beyond Black-Box AI: Interpretable Hybrid Systems for Dementia Care

📅 2025-07-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current AI, particularly large language models (LLMs), has shown limited clinical utility in dementia diagnosis and care, primarily due to opaque black-box decision-making, hallucination risk, and weak causal reasoning; the result is low clinician trust and no measurable improvement in diagnostic performance. To address this, the authors argue for a human-in-the-loop neuro-symbolic hybrid intelligence framework that integrates LLMs with clinical causal knowledge, rule-based reasoning engines, and explainable AI (XAI) techniques, combining predictive accuracy with decision transparency. The central idea is a tight integration of statistical learning, symbolic inference, and domain-expert clinical workflows, improving both explanation consistency and fit with practice. Rather than reporting an empirical evaluation, this scoping review proposes judging future systems by clinicians' comprehension of AI outputs, trust in recommendations, and clinical adoption: an evaluation paradigm centered on clinical interpretability, workflow integration, and patient-centered outcomes.
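To make the proposed pattern concrete, here is one minimal way such a pipeline could be wired: a statistical scorer whose output is only auto-endorsed when symbolic rules corroborate it, with low-confidence or unsupported cases flagged for clinician review. All names, rules, and thresholds below are hypothetical illustrations, not the authors' implementation.

```python
# Minimal sketch of the human-in-the-loop neuro-symbolic pattern described
# above. statistical_model, RULES, and all thresholds are hypothetical
# stand-ins, not the paper's system.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    label: str
    probability: float
    supporting_rules: list = field(default_factory=list)
    needs_clinician_review: bool = True  # human stays in the loop by default

def statistical_model(features: dict) -> tuple[str, float]:
    """Stand-in for an LLM or ML classifier returning (label, probability)."""
    score = 0.8 if features.get("mmse", 30) < 24 else 0.2
    return ("possible dementia", score)

# Symbolic layer: expert rules that must corroborate the statistical output
# before it is surfaced with an explanation the clinician can audit.
RULES = [
    ("MMSE below 24 suggests cognitive impairment",
     lambda f: f.get("mmse", 30) < 24),
    ("Decline reported by informant",
     lambda f: f.get("informant_reported_decline", False)),
]

def recommend(features: dict) -> Recommendation:
    label, prob = statistical_model(features)
    fired = [name for name, test in RULES if test(features)]
    # Auto-endorse only when symbolic evidence corroborates the prediction;
    # otherwise flag for clinician review (the human-in-the-loop step).
    return Recommendation(label, prob, fired,
                          needs_clinician_review=(prob < 0.7 or not fired))

print(recommend({"mmse": 21, "informant_reported_decline": True}))
```

The fired rules double as the explanation shown to the clinician, which is the transparency property the summary emphasises.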

📝 Abstract
The recent boom of large language models (LLMs) has re-ignited the hope that artificial intelligence (AI) systems could aid medical diagnosis. Yet despite dazzling benchmark scores, LLM assistants have yet to deliver measurable improvements at the bedside. This scoping review highlights the areas where AI is limited in making practical contributions in the clinical setting, specifically in dementia diagnosis and care. Standalone machine-learning models excel at pattern recognition but seldom provide actionable, interpretable guidance, eroding clinician trust. Adjacent use of LLMs by physicians did not result in better diagnostic accuracy or speed. Key limitations trace to the data-driven paradigm: black-box outputs that lack transparency, vulnerability to hallucinations, and weak causal reasoning. Hybrid approaches that combine statistical learning with expert rule-based knowledge, and that involve clinicians throughout the process, help restore interpretability. They also fit better with existing clinical workflows, as seen in examples like PEIRS and ATHENA-CDS. Future decision support should prioritise explanatory coherence by linking predictions to clinically meaningful causes. This can be done through neuro-symbolic or hybrid AI that combines the language ability of LLMs with human causal expertise. AI researchers have begun to address this direction, with explainable AI and neuro-symbolic AI as the next logical steps. However, these approaches still rely on data-driven knowledge integration rather than human-in-the-loop methods. Future research should measure success not only by accuracy but by improvements in clinician understanding, workflow fit, and patient outcomes. A better understanding of what improves human-computer interaction is needed for AI systems to become part of clinical practice.
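To give a flavour of the rule-based side the abstract credits to systems like PEIRS (which was built on ripple-down rules), the sketch below is a toy ripple-down-rules evaluator. The conditions and conclusions are invented for illustration and are not from the paper.

```python
# Toy ripple-down-rules (RDR) evaluator in the spirit of systems like PEIRS.
# The rule conditions and conclusions are invented for illustration.
class RDRNode:
    def __init__(self, condition, conclusion, if_true=None, if_false=None):
        self.condition = condition    # predicate over the case
        self.conclusion = conclusion  # interpretation if the condition holds
        self.if_true = if_true        # "except" branch: refinements
        self.if_false = if_false      # "else" branch: alternatives

def interpret(node, case, last=None):
    """Return the conclusion of the deepest rule that fires (classic RDR)."""
    if node is None:
        return last
    if node.condition(case):
        return interpret(node.if_true, case, node.conclusion)
    return interpret(node.if_false, case, last)

# The default rule always fires; clinicians add refinements as exceptions.
kb = RDRNode(lambda c: True, "no comment",
             if_true=RDRNode(lambda c: c["tsh"] > 4.0,
                             "consider hypothyroidism",
                             if_true=RDRNode(lambda c: c["ft4"] < 9.0,
                                             "overt hypothyroidism likely")))

print(interpret(kb, {"tsh": 6.2, "ft4": 7.5}))  # overt hypothyroidism likely
```

The appeal of this style for clinical workflows is that every conclusion traces to an explicit, expert-authored rule, and the knowledge base grows by adding exceptions rather than by retraining an opaque model.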
Problem

Research questions and friction points this paper is trying to address.

AI lacks transparency in dementia diagnosis and care
Hybrid systems improve interpretability and clinical workflow fit
Future AI should enhance clinician understanding and patient outcomes
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hybrid AI combining statistical and rule-based knowledge
Neuro-symbolic AI integrating LLMs with human expertise
Explainable AI prioritizing clinical interpretability
Matthew JY Kang
Neuropsychiatry Centre, The Royal Melbourne Hospital, Grattan St VIC 3052, Melbourne, Australia; Department of Psychiatry, The University of Melbourne, Grattan St VIC 3052, Melbourne, Australia
Wenli Yang
University of Tasmania
Image processing · Computer Vision · Machine learning/Deep learning · AI
Monica R Roberts
Alfred Mental and Addiction Health, Alfred Health, Commercial Rd VIC 3005, Melbourne, Australia
Byeong Ho Kang
Professor, University of Tasmania
Expert Systems · World Wide Web · Social Networks · Smart factory
Charles B Malpas
Department of Medicine, Melbourne Medical School, University of Melbourne, Grattan St VIC 3052, Melbourne, Australia; Melbourne School of Psychological Sciences, University of Melbourne, Grattan St VIC 3052, Melbourne, Australia