Evaluating Strategies for Synthesizing Clinical Notes for Medical Multimodal AI

📅 2025-11-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Medical multimodal AI is hindered by the scarcity of high-quality heterogeneous data, particularly in dermatology, where image datasets lack rich clinical textual annotations—limiting model robustness and generalization. To address this, we propose a fine-tuning-free prompt engineering framework that leverages structured medical metadata (e.g., lesion location, age, sex) to guide large language models in generating high-fidelity, low-hallucination synthetic clinical notes. This approach is the first to enable image-to-text cross-modal retrieval solely through prompt design. Evaluated across multiple dermatological benchmarks, the synthetic notes significantly improve multimodal classification accuracy (+3.2–7.8%), with even greater gains under domain shift. Our core innovation lies in embedding clinical priors directly into the prompting mechanism—ensuring both clinical plausibility and modeling efficacy—thereby bridging the modality gap without architectural modification or parameter updates.

Technology Category

Application Category

📝 Abstract
Multimodal (MM) learning is emerging as a promising paradigm in biomedical artificial intelligence (AI) applications, integrating complementary modality, which highlight different aspects of patient health. The scarcity of large heterogeneous biomedical MM data has restrained the development of robust models for medical AI applications. In the dermatology domain, for instance, skin lesion datasets typically include only images linked to minimal metadata describing the condition, thereby limiting the benefits of MM data integration for reliable and generalizable predictions. Recent advances in Large Language Models (LLMs) enable the synthesis of textual description of image findings, potentially allowing the combination of image and text representations. However, LLMs are not specifically trained for use in the medical domain, and their naive inclusion has raised concerns about the risk of hallucinations in clinically relevant contexts. This work investigates strategies for generating synthetic textual clinical notes, in terms of prompt design and medical metadata inclusion, and evaluates their impact on MM architectures toward enhancing performance in classification and cross-modal retrieval tasks. Experiments across several heterogeneous dermatology datasets demonstrate that synthetic clinical notes not only enhance classification performance, particularly under domain shift, but also unlock cross-modal retrieval capabilities, a downstream task that is not explicitly optimized during training.
Problem

Research questions and friction points this paper is trying to address.

Generating synthetic clinical notes using LLMs for multimodal medical AI
Evaluating strategies to reduce hallucinations in synthetic medical text
Enhancing classification and cross-modal retrieval in dermatology datasets
Innovation

Methods, ideas, or system contributions that make the work stand out.

Synthetic clinical notes generation via prompt design
Medical metadata integration for multimodal learning
Cross-modal retrieval enhancement without explicit training optimization
🔎 Similar Papers
No similar papers found.
N
Niccolo Marini
Division of Intramural Research, National Library of Medicine, National Institutes of Health
Z
Zhaohui Liang
Division of Intramural Research, National Library of Medicine, National Institutes of Health
Sivaramakrishnan Rajaraman
Sivaramakrishnan Rajaraman
Research Scientist@Guidehouse Digital LLC@National Library of Medicine, NIH
Deep LearningMachine LearningImage ProcessingArtificial IntelligenceCADx Tools
Z
Zhiyun Xue
Division of Intramural Research, National Library of Medicine, National Institutes of Health
Sameer Antani
Sameer Antani
National Library of Medicine, National Institutes of Health
Medical ImagingMachine LearningArtificial IntelligenceImage InformaticsVisual Information