AI Summary
This work addresses the limitations of existing approaches to multimodal mental health assessment, which often reduce the problem to coarse-grained classification and fail to capture fine-grained emotional and cognitive cues or to provide interpretability. To overcome these shortcomings, we introduce a novel task, Emotion-Cognition Multimodal Captioning (ECMC), which generates natural language descriptions that explicitly link multimodal signals to psychological states through emotion-cognition profiles. Our method employs modality-specific encoders, a Q-Former-based dual-stream BridgeNet fusion module, contrastive learning for enhanced cross-modal alignment, and a LLaMA decoder that produces semantically coherent descriptions. Experimental results demonstrate that our approach outperforms current models on both objective metrics and human evaluations, significantly improving diagnostic accuracy and interpretability in mental health assessment.
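The pipeline described above (modality-specific encoders, dual-stream fusion, then a decoder) can be sketched as a toy dataflow. Everything below is a hypothetical stand-in for illustration only: the function names are invented, mean pooling replaces Q-Former cross-attention, and a format string replaces the LLaMA decoder.

```python
# Toy dataflow sketch of the described pipeline (hypothetical names).
# Real modality encoders, the Q-Former-based BridgeNet, and the LLaMA
# decoder are replaced with trivial stand-ins to show only the wiring.

def encode_modality(signal):
    """Stand-in modality-specific encoder: one scalar feature per signal."""
    return sum(signal) / len(signal)

def bridgenet_fuse(features, emotion_queries, cognition_queries):
    """Dual-stream fusion: separate emotion and cognition query streams
    each attend to the shared modality features (mean pooling here
    stands in for Q-Former cross-attention)."""
    pooled = sum(features) / len(features)
    emotion_stream = [q * pooled for q in emotion_queries]
    cognition_stream = [q * pooled for q in cognition_queries]
    return emotion_stream, cognition_stream

def decode_caption(emotion_stream, cognition_stream):
    """Stand-in for the LLaMA decoder conditioned on both streams."""
    return (f"emotion tokens={len(emotion_stream)}, "
            f"cognition tokens={len(cognition_stream)}")

# Example: three modalities (e.g. text, audio, video embeddings).
features = [encode_modality(s) for s in ([1.0, 2.0], [0.5, 1.5], [2.0, 4.0])]
caption = decode_caption(*bridgenet_fuse(features, [0.1, 0.2], [0.3]))
```

The point of the sketch is the separation of concerns: per-modality encoding, two query streams that read from one shared feature pool, and a decoder that sees both streams.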
Abstract
Emotional and cognitive factors are essential for understanding mental health disorders. However, existing methods often reduce multi-modal mental health analysis to classification tasks, limiting interpretability, especially with respect to emotion and cognition. Although large language models (LLMs) offer opportunities for mental health analysis, they rely mainly on textual semantics and overlook fine-grained emotional and cognitive cues in multi-modal inputs. While some studies incorporate emotional features via transfer learning, their connection to mental health conditions remains implicit. To address these issues, we propose ECMC, a novel task that aims to generate natural language descriptions of emotional and cognitive states from multi-modal data and to produce emotion-cognition profiles that improve both the accuracy and interpretability of mental health assessments. We adopt an encoder-decoder architecture in which modality-specific encoders extract features that are fused by a dual-stream BridgeNet based on the Q-Former. Contrastive learning enhances the extraction of emotional and cognitive features, and a LLaMA decoder then aligns these features with annotated captions to produce detailed descriptions. Extensive objective and subjective evaluations demonstrate that: 1) ECMC outperforms existing multi-modal LLMs and mental health models in generating emotion-cognition captions; and 2) the generated emotion-cognition profiles significantly improve assistive diagnosis and interpretability in mental health analysis.
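The abstract states that contrastive learning strengthens the extraction and cross-modal alignment of emotional and cognitive features. A common formulation of such an objective is an InfoNCE-style loss over paired embeddings; the pure-Python sketch below is illustrative only (the exact loss form and the temperature value are assumptions, not the paper's stated objective).

```python
import math

def infonce_loss(anchors, positives, temperature=0.07):
    """InfoNCE-style contrastive loss over paired embeddings.

    anchors[i] and positives[i] are embeddings of the same sample from
    two streams (e.g. an emotion query vs. its matching modality
    feature); all other pairs in the batch serve as negatives.
    Pure-Python version for clarity; a real implementation would be
    vectorized over a tensor batch.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def cosine(a, b):
        na = math.sqrt(dot(a, a)) or 1.0
        nb = math.sqrt(dot(b, b)) or 1.0
        return dot(a, b) / (na * nb)

    n = len(anchors)
    total = 0.0
    for i in range(n):
        logits = [cosine(anchors[i], p) / temperature for p in positives]
        log_denom = math.log(sum(math.exp(l) for l in logits))
        total += log_denom - logits[i]  # -log softmax prob of the true pair
    return total / n
```

Minimizing this loss pulls each anchor toward its matching positive and pushes it away from the other samples, which is how contrastive training encourages emotion- and cognition-relevant features to align across modalities.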