Large Language Models in Mental Health Care: a Scoping Review

📅 2024-01-01
🏛️ arXiv.org
📈 Citations: 44
Influential: 1
🤖 AI Summary
This study systematically evaluates the current applications, challenges, and potential of large language models (LLMs) in mental health from 2019 to 2023. Addressing the lack of consolidated empirical evidence, we conducted a cross-platform systematic review across six repositories (e.g., PubMed, arXiv, medRxiv) using mixed inclusion criteria, yielding 34 high-quality empirical studies from an initial pool of 313 publications—the first structured evidence map of LLMs in psychiatry and mental health. Results indicate that LLMs enhance accuracy and accessibility in depression and anxiety screening, yet face critical barriers: limited clinical validity, absence of standardized evaluation protocols, insufficient affective reasoning capabilities, and unaddressed ethical governance gaps. To bridge these gaps, we propose a tripartite implementation framework integrating interdisciplinary collaboration, a standardized assessment methodology, and development of high-quality, clinically annotated datasets—establishing a methodological foundation and practical roadmap for LLM-driven mental health intelligence.

📝 Abstract
The integration of large language models (LLMs) into mental health care is an emerging field. There is a need to systematically review application outcomes and delineate their advantages and limitations in clinical settings. This review aims to provide a comprehensive overview of the use of LLMs in mental health care, assessing their efficacy, challenges, and potential for future applications. A systematic search was conducted across multiple databases, including PubMed, Web of Science, Google Scholar, arXiv, medRxiv, and PsyArXiv, in November 2023. All forms of original research, peer-reviewed or not, published or disseminated between October 1, 2019, and December 2, 2023, were included without language restrictions if they used LLMs developed after T5 and directly addressed research questions in mental health care settings. From an initial pool of 313 articles, 34 met the inclusion criteria based on their relevance to LLM applications in mental health care and the robustness of reported outcomes. Diverse applications of LLMs in mental health care were identified, including diagnosis, therapy, and enhancement of patient engagement. Key challenges include data availability and reliability, nuanced handling of mental states, and effective evaluation methods. Despite improvements in accuracy and accessibility, gaps in clinical applicability and ethical considerations were evident, pointing to the need for robust data, standardized evaluations, and interdisciplinary collaboration. LLMs hold substantial promise for enhancing mental health care; for that promise to be realized, emphasis must be placed on robust datasets, development and evaluation frameworks, ethical guidelines, and interdisciplinary collaboration to address current limitations.
Problem

Research questions and friction points this paper is trying to address.

Evaluating effectiveness of LLMs in mental health care
Identifying challenges in applying LLMs to mental health care
Exploring future potential of LLMs in mental health
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic search across six databases, including preprint servers
Inclusion restricted to studies employing LLMs developed after T5
Structured mapping of identified applications, including diagnosis and therapy