Truth, Trust, and Trouble: Medical AI on the Edge

📅 2025-07-01
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the challenge of jointly optimizing factual accuracy, helpfulness, and safety in large language models (LLMs) for medical question answering. We construct a multidimensional evaluation benchmark comprising over one thousand health-related questions. A novel three-dimensional evaluation framework—integrating honesty, helpfulness, and harmlessness—is proposed to systematically assess open-source models including Mistral-7B, BioMistral-7B-DARE, and AlpaCare-13B, while comparing few-shot prompting, domain-specific fine-tuning, and human evaluation. Results show that AlpaCare-13B achieves the highest accuracy (91.7%) and harmlessness score (0.92); BioMistral-7B-DARE attains strong safety (0.90); few-shot prompting improves overall accuracy by 7 percentage points but consistently reduces helpfulness on complex queries. Crucially, we empirically uncover trade-offs among accuracy, safety, and practical utility in clinical settings—providing the first reproducible evaluation paradigm and empirical foundation for trustworthy deployment of medical LLMs.

Technology Category

Application Category

📝 Abstract
Large Language Models (LLMs) hold significant promise for transforming digital health by enabling automated medical question answering. However, ensuring these models meet critical industry standards for factual accuracy, usefulness, and safety remains a challenge, especially for open-source solutions. We present a rigorous benchmarking framework using a dataset of over 1,000 health questions. We assess model performance across honesty, helpfulness, and harmlessness. Our results highlight trade-offs between factual reliability and safety among evaluated models -- Mistral-7B, BioMistral-7B-DARE, and AlpaCare-13B. AlpaCare-13B achieves the highest accuracy (91.7%) and harmlessness (0.92), while domain-specific tuning in BioMistral-7B-DARE boosts safety (0.90) despite its smaller scale. Few-shot prompting improves accuracy from 78% to 85%, and all models show reduced helpfulness on complex queries, highlighting ongoing challenges in clinical QA.
Problem

Research questions and friction points this paper is trying to address.

Ensuring medical AI meets accuracy, usefulness, and safety standards
Evaluating trade-offs between factual reliability and safety in LLMs
Improving model performance on complex clinical questions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rigorous benchmarking with 1000 health questions
Domain-specific tuning enhances model safety
Few-shot prompting boosts accuracy significantly
🔎 Similar Papers
No similar papers found.