HEALTH-PARIKSHA: Assessing RAG Models for Health Chatbots in Real-World Multilingual Settings

📅 2024-10-17
🏛️ arXiv.org
📈 Citations: 3
Influential: 0
🤖 AI Summary
This study addresses the gap in evaluating large language models (LLMs) for multilingual retrieval-augmented generation (RAG) in real-world healthcare settings. We systematically assess 24 LLMs on authentic, patient-derived medical dialogues in Indian English and four Indo-Aryan languages, using a unified RAG framework, multilingual automated metrics (FACTOR/BERTScore), and human double-blind evaluation—specifically targeting code-mixed inputs and cultural adaptation challenges. Results reveal substantial cross-lingual performance disparities: factual accuracy for Hindi queries is consistently lower than for English, and instruction-tuned Hindi models exhibit limited robustness on localized queries. Our key contributions are: (1) the first real-world, multilingual medical RAG benchmark; (2) empirical evidence of systematic degradation in model robustness due to cultural context and code-mixing; and (3) actionable insights and empirically grounded recommendations for deploying health AI in low-resource linguistic settings.

📝 Abstract
Assessing the capabilities and limitations of large language models (LLMs) has garnered significant interest, yet the evaluation of multiple models in real-world scenarios remains rare. Multilingual evaluation often relies on translated benchmarks, which typically do not capture the linguistic and cultural nuances of the source language. This study provides an extensive assessment of 24 LLMs on real-world data collected from Indian patients interacting with a medical chatbot in Indian English and four other Indic languages. We employ a uniform Retrieval-Augmented Generation framework to generate responses, which are evaluated using both automated techniques and human evaluators on four metrics specific to our application. We find that models vary significantly in their performance and that instruction-tuned Indic models do not always perform well on Indic language queries. Further, we empirically show that factual correctness is generally lower for responses to Indic queries than for English queries. Finally, our qualitative analysis shows that code-mixed and culturally relevant queries in our dataset pose challenges to the evaluated models.
Problem

Research questions and friction points this paper is trying to address.

Evaluating 24 LLMs for health chatbots in multilingual real-world settings
Assessing factual correctness disparities between English and Indic language queries
Analyzing challenges with code-mixed and culturally relevant medical queries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Employed Retrieval Augmented Generation for response generation
Evaluated models using automated and human assessment methods
Tested on real multilingual patient data from India
Varun Gumma
Microsoft Corporation
Anandhita Raghunath
University of Washington
Mohit Jain
Microsoft Corporation
Sunayana Sitaram
Microsoft Research India
Multilingual NLP, evaluation, LLMs and culture, multilingualism, LLMs