🤖 AI Summary
This study addresses the challenges of scarce clinical data, limited cross-lingual generalization, and insufficient model interpretability in the early detection of mild cognitive impairment (MCI) from speech. To overcome these limitations, we propose SynCog, a novel framework that integrates role-driven zero-shot multimodal data synthesis with chain-of-thought (CoT) fine-tuning. This approach generates diverse synthetic subjects without requiring real annotated data and strengthens the reasoning and interpretability of multimodal large language models. Evaluated on the ADReSS and ADReSSo benchmarks, SynCog achieves Macro-F1 scores of 80.67% and 78.46%, respectively. It further attains a Macro-F1 of 48.71% on the real-world Chinese dataset CIR-E, demonstrating strong cross-lingual generalization and promising clinical applicability.
📝 Abstract
Speech-based digital biomarkers represent a scalable, non-invasive frontier for the early identification of Mild Cognitive Impairment (MCI). However, the development of robust diagnostic models remains impeded by acute clinical data scarcity and a lack of interpretable reasoning. Current solutions frequently struggle with cross-lingual generalization and fail to provide the transparent rationales essential for clinical trust. To address these barriers, we introduce SynCog, a novel framework integrating controllable zero-shot multimodal data synthesis with Chain-of-Thought (CoT) deduction fine-tuning. Specifically, SynCog simulates diverse virtual subjects with varying cognitive profiles to alleviate clinical data scarcity. This generative paradigm enables rapid, zero-shot expansion of clinical corpora across diverse languages, bypassing data bottlenecks in low-resource settings and bolstering the diagnostic performance of Multimodal Large Language Models (MLLMs). Leveraging this synthesized dataset, we fine-tune a foundational multimodal backbone with a CoT deduction strategy, enabling the model to articulate its diagnostic reasoning explicitly rather than produce black-box predictions. Extensive experiments on the ADReSS and ADReSSo benchmarks demonstrate that augmenting limited clinical data with synthetic phenotypes yields competitive diagnostic performance: SynCog achieves Macro-F1 scores of 80.67% and 78.46%, respectively, outperforming current baseline models. Furthermore, on an independent real-world Mandarin cohort (CIR-E), the model attains a Macro-F1 of 48.71%, demonstrating robust cross-linguistic generalization. These findings constitute a critical step toward providing clinically trustworthy and linguistically inclusive cognitive assessment tools for global healthcare.
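To make the two-stage pipeline described above concrete, the following is a minimal, hypothetical Python sketch of the role-driven zero-shot synthesis step and of the shape a CoT fine-tuning record might take. All names here (`SubjectProfile`, `TRAIT_BANK`, the prompt wording, the record schema) are illustrative assumptions for exposition, not the paper's actual implementation or data format.

```python
# Hypothetical sketch of role-driven zero-shot subject synthesis and a
# CoT fine-tuning record. Field names, trait lists, and prompt wording
# are assumptions, not SynCog's actual implementation.
import json
import random
from dataclasses import dataclass, asdict, field

@dataclass
class SubjectProfile:
    """A virtual subject with a controllable cognitive phenotype."""
    age: int
    language: str                         # e.g. "English" or "Mandarin"
    diagnosis: str                        # "healthy control" or "MCI"
    speech_traits: list = field(default_factory=list)

# Illustrative surface markers the generator is asked to express.
TRAIT_BANK = {
    "healthy control": ["fluent narration", "rich vocabulary",
                        "coherent event ordering"],
    "MCI": ["frequent pauses", "word-finding difficulty",
            "repetition of phrases"],
}

def sample_profile(language: str) -> SubjectProfile:
    """Draw a random virtual subject; class balance would be enforced upstream."""
    diagnosis = random.choice(list(TRAIT_BANK))
    return SubjectProfile(
        age=random.randint(60, 85),
        language=language,
        diagnosis=diagnosis,
        speech_traits=random.sample(TRAIT_BANK[diagnosis], k=2),
    )

def build_role_prompt(profile: SubjectProfile, task: str) -> str:
    """Zero-shot role prompt asking a generator model to speak as the subject."""
    return (
        f"You are a {profile.age}-year-old {profile.language}-speaking adult "
        f"({profile.diagnosis}). Exhibiting {', '.join(profile.speech_traits)}, "
        f"describe the following picture in detail: {task}"
    )

def to_cot_record(profile: SubjectProfile, transcript: str, rationale: str) -> dict:
    """Package one synthetic subject as a CoT fine-tuning example:
    transcript -> step-by-step rationale -> diagnostic label."""
    return {
        "input": transcript,
        "reasoning": rationale,   # explicit chain of thought, not just a label
        "label": profile.diagnosis,
    }

if __name__ == "__main__":
    profile = sample_profile("English")
    print(build_role_prompt(profile, "the Cookie Theft scene"))
    print(json.dumps(asdict(profile), indent=2))
```

In this sketch, the prompt would be sent to a generator MLLM to produce a synthetic picture-description transcript (the Cookie Theft task is the standard elicitation used in ADReSS/ADReSSo), and the resulting transcript plus an explicit rationale would be packaged as one supervised CoT example for fine-tuning the diagnostic backbone.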