🤖 AI Summary
Prior evaluations of large language models (LLMs) in health consultation lack ecological validity, relying heavily on synthetic or decontextualized data rather than real-world user queries.
Method: We introduce the first in-the-wild evaluation paradigm grounded in crowdsourced, authentic health questions from 34 participants (212 natural-language queries), submitted to four publicly accessible LLMs (including GPT-4, Claude, and Llama). Responses underwent blinded clinical review by nine board-certified physicians; we further conducted physician-guided augmentation with retrieval-augmented generation (RAG) and in-depth qualitative interviews.
Contribution/Results: Physicians deemed 76% of the 212 LLM responses accurate overall. RAG significantly improved response reliability (p < 0.01), while qualitative analysis uncovered critical failure modes, including overconfidence, omission of differential diagnoses, and misleading exclusionary statements. This work integrates clinical plausibility assessment, multi-expert blinded evaluation, and empirical RAG validation, establishing a reproducible methodological framework and actionable improvement pathways for safer LLM deployment in high-stakes healthcare settings.
📝 Abstract
The proliferation of Large Language Models (LLMs) in high-stakes applications such as medical (self-)diagnosis and preliminary triage raises significant ethical and practical concerns about the effectiveness, appropriateness, and potential harmfulness of these technologies for health-related queries. Prior work has evaluated LLMs on expert-written health prompts, questions from medical examination banks, or queries derived from pre-existing clinical cases. However, these studies overlook an in-the-wild evaluation of how well LLMs answer the everyday health concerns and queries typically posed by general users, which is the more prevalent use case. To address this research gap, this paper presents findings from a university-level competition that leveraged a novel, crowdsourced approach for evaluating the effectiveness of LLMs in answering everyday health queries. Over the course of a week, 34 participants prompted four publicly accessible LLMs with 212 real (or imagined) health concerns, and the LLM-generated responses were evaluated by a team of nine board-certified physicians. At a high level, our findings indicate that, on average, 76% of the 212 LLM responses were deemed accurate by the physicians. Further, with the help of medical professionals, we investigated whether retrieval-augmented generation (RAG) versions of these LLMs, powered by a comprehensive medical knowledge base, can improve the quality of the generated responses. Finally, we derive qualitative insights that explain our quantitative findings through interviews with seven medical professionals who were shown all the prompts from our competition. This paper aims to provide a more grounded understanding of how LLMs perform in real-world everyday health communication.
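To make the RAG setup described above concrete, here is a minimal sketch of how a health query might be augmented with passages retrieved from a medical knowledge base before being sent to an LLM. The paper does not specify its retriever, embedding model, or knowledge base, so the toy bag-of-words retriever, the example passages, and the function names below are all illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of retrieval-augmented prompting for health queries.
# The knowledge base, similarity function, and prompt template are toy
# stand-ins for whatever the study actually used.
import math
from collections import Counter


def _vec(text):
    """Bag-of-words term counts (stand-in for a real embedding model)."""
    return Counter(text.lower().split())


def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query, knowledge_base, k=2):
    """Return the k passages most similar to the query."""
    qv = _vec(query)
    ranked = sorted(
        knowledge_base, key=lambda p: _cosine(qv, _vec(p)), reverse=True
    )
    return ranked[:k]


def build_rag_prompt(query, knowledge_base, k=2):
    """Prepend retrieved medical context to the user's health query."""
    context = "\n".join(f"- {p}" for p in retrieve(query, knowledge_base, k))
    return (
        "Answer using the medical reference excerpts below.\n"
        f"Excerpts:\n{context}\n\nQuestion: {query}"
    )


# Illustrative passages, not from any real medical knowledge base.
kb = [
    "Tension headaches are often triggered by stress and poor sleep.",
    "Chest pain with shortness of breath warrants emergency evaluation.",
    "Seasonal allergies commonly cause sneezing and itchy eyes.",
]
prompt = build_rag_prompt(
    "Why do I keep getting headaches when stressed?", kb, k=1
)
```

The resulting `prompt` string, which grounds the question in the retrieved excerpt, would then be passed to the LLM in place of the bare user query; in practice the word-overlap retriever would be replaced by a dense embedding index over a vetted medical corpus.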