AI Summary
This study addresses the challenge of missing modalities in clinical neuroimaging, which hinders multimodal diagnosis of Alzheimer's disease. To this end, the authors propose ACADiff, a framework based on latent diffusion models that progressively denoises and fuses structural MRI (sMRI), FDG-PET, and AV45-PET data along with clinical metadata in a latent space through an adaptive clinical-aware mechanism. The method incorporates a dynamic adaptive fusion strategy and semantic clinical prompts encoded by GPT-4o, enabling, for the first time, bidirectional generation across all three modalities under arbitrary missingness patterns. Evaluated on the ADNI dataset, ACADiff significantly outperforms existing approaches, maintaining high image fidelity and stable diagnostic performance even when up to 80% of modalities are missing.
Abstract
Multimodal neuroimaging provides complementary insights for Alzheimer's disease diagnosis, yet clinical datasets frequently suffer from missing modalities. We propose ACADiff, a framework that synthesizes missing brain imaging modalities through adaptive clinical-aware diffusion. ACADiff learns mappings between incomplete multimodal observations and target modalities by progressively denoising latent representations while attending to available imaging data and clinical metadata. The framework employs adaptive fusion that dynamically reconfigures based on input availability, coupled with semantic clinical guidance via GPT-4o-encoded prompts. Three specialized generators enable bidirectional synthesis among sMRI, FDG-PET, and AV45-PET. Evaluated on ADNI subjects, ACADiff achieves superior generation quality and maintains robust diagnostic performance even under extreme 80% missing scenarios, outperforming all existing baselines. To promote reproducibility, code is available at https://github.com/rongzhou7/ACADiff.
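To make the idea of availability-aware fusion concrete, below is a minimal PyTorch sketch of how a fusion module might reweight per-modality latents when some modalities are missing. This is an illustrative assumption, not the actual ACADiff architecture: the class name `AdaptiveModalityFusion`, the gating design, and all dimensions are hypothetical, chosen only to show how attention weights can be renormalized over the modalities that are actually present.

```python
import torch
import torch.nn as nn


class AdaptiveModalityFusion(nn.Module):
    """Hypothetical sketch of availability-aware latent fusion.

    Each available modality's latent (e.g., sMRI, FDG-PET, AV45-PET)
    is projected and scored by a learned gate; missing modalities are
    masked out so the softmax renormalizes over present modalities only.
    """

    def __init__(self, latent_dim: int, num_modalities: int = 3):
        super().__init__()
        # One projection per modality (hypothetical design choice).
        self.proj = nn.ModuleList(
            [nn.Linear(latent_dim, latent_dim) for _ in range(num_modalities)]
        )
        # Scalar gate scoring each projected latent.
        self.gate = nn.Linear(latent_dim, 1)

    def forward(self, latents: torch.Tensor, available: torch.Tensor) -> torch.Tensor:
        # latents:   (batch, num_modalities, latent_dim)
        # available: (batch, num_modalities) binary mask, 1 = modality present
        projected = torch.stack(
            [p(latents[:, i]) for i, p in enumerate(self.proj)], dim=1
        )
        scores = self.gate(projected).squeeze(-1)                  # (B, M)
        scores = scores.masked_fill(available == 0, float("-inf"))  # drop missing
        weights = torch.softmax(scores, dim=1)                      # sums to 1 over present
        return (weights.unsqueeze(-1) * projected).sum(dim=1)       # (B, latent_dim)
```

In this sketch, masking scores to negative infinity before the softmax means the fused latent depends only on observed modalities, which is one simple way a model could "dynamically reconfigure based on input availability."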