Hierarchical Self-Supervised Representation Learning for Depression Detection from Speech

📅 2025-10-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Speech-based depression detection faces challenges including sparse depressive cues, irregular temporal dynamics, and underutilization of multi-level acoustic representations. To address these, we propose HAREN-CTC—a novel framework that (1) dynamically fuses **hierarchical representations** from self-supervised speech models (e.g., WavLM) via a **cross-layer attention mechanism**, and (2) incorporates **Connectionist Temporal Classification (CTC) loss** to enable robust temporal alignment and capture subtle, persistent depressive prosodic patterns. The method integrates multi-task learning with a hierarchical feature complementarity mechanism, mitigating overfitting from reliance on single-layer features. Evaluated on DAIC-WOZ and MODMA, HAREN-CTC achieves state-of-the-art macro-F1 scores of 0.81 and 0.82, respectively, demonstrating superior accuracy and strong cross-dataset generalizability.
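The cross-layer fusion idea described above can be sketched as a learned attention over the layer axis of an SSL encoder's hidden states. This is a minimal illustration, not the paper's implementation: the mean pooling per layer and the single query vector `q` are simplifying assumptions standing in for the full cross-layer attention mechanism.

```python
import numpy as np

def fuse_layers(layer_feats, q):
    """Attention-weighted fusion of per-layer SSL features.

    layer_feats: array of shape (L, T, D) -- L transformer layers,
                 T frames, D feature dims (e.g. WavLM hidden states).
    q: query vector of shape (D,) used to score each layer
       (assumed learned; random here for illustration).
    """
    # Score each layer by the dot product of its mean-pooled
    # representation with the query vector.
    pooled = layer_feats.mean(axis=1)            # (L, D)
    scores = pooled @ q                          # (L,)
    # Softmax over layers gives the fusion weights.
    w = np.exp(scores - scores.max())
    w /= w.sum()
    # Weighted sum collapses the layer axis: result is (T, D).
    fused = np.einsum("l,ltd->td", w, layer_feats)
    return fused, w

rng = np.random.default_rng(0)
feats = rng.standard_normal((12, 50, 768))   # 12 layers, 50 frames
query = rng.standard_normal(768)
fused, weights = fuse_layers(feats, query)
print(fused.shape)  # (50, 768)
```

In contrast to picking a single "best" layer, the softmax weights let gradients flow to every layer, which is the property the summary credits with reducing dataset-specific overfitting.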

📝 Abstract
Speech-based depression detection (SDD) is a promising, non-invasive alternative to traditional clinical assessments. However, it remains limited by the difficulty of extracting meaningful features and capturing sparse, heterogeneous depressive cues over time. Pretrained self-supervised learning (SSL) models such as WavLM provide rich, multi-layer speech representations, yet most existing SDD methods rely only on the final layer or search for a single best-performing one. These approaches often overfit to specific datasets and fail to leverage the full hierarchical structure needed to detect subtle and persistent depression signals. To address this challenge, we propose HAREN-CTC, a novel architecture that integrates multi-layer SSL features using cross-attention within a multitask learning framework, combined with Connectionist Temporal Classification loss to handle sparse temporal supervision. HAREN-CTC comprises two key modules: a Hierarchical Adaptive Clustering module that reorganizes SSL features into complementary embeddings, and a Cross-Modal Fusion module that models inter-layer dependencies through cross-attention. The CTC objective enables alignment-aware training, allowing the model to track irregular temporal patterns of depressive speech cues. We evaluate HAREN-CTC under both an upper-bound setting with standard data splits and a generalization setting using five-fold cross-validation. The model achieves state-of-the-art macro F1-scores of 0.81 on DAIC-WOZ and 0.82 on MODMA, outperforming prior methods across both evaluation scenarios.
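The role of the CTC objective described in the abstract, aligning a short label sequence to a longer frame sequence by summing over all valid alignments, can be illustrated with the standard CTC forward (alpha) recursion. This is a generic CTC sketch on toy probabilities, not the paper's model or loss configuration:

```python
import numpy as np

def ctc_forward_prob(probs, labels, blank=0):
    """Total probability of `labels` under CTC, summing over all
    frame-level alignments via the forward (alpha) recursion.

    probs: (T, V) per-frame label probabilities.
    labels: target sequence without blanks.
    """
    # Extended label sequence with blanks interleaved: b, l1, b, l2, ..., b
    ext = [blank]
    for l in labels:
        ext += [l, blank]
    S, T = len(ext), probs.shape[0]

    alpha = np.zeros((T, S))
    alpha[0, 0] = probs[0, ext[0]]   # start on leading blank
    alpha[0, 1] = probs[0, ext[1]]   # or on the first label
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1, s]
            if s > 0:
                a += alpha[t - 1, s - 1]
            # Skip transition over a blank is allowed between
            # two different labels.
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1, s - 2]
            alpha[t, s] = a * probs[t, ext[s]]
    # Valid paths end on the last label or the trailing blank.
    return alpha[-1, -1] + alpha[-1, -2]

# Toy example: 2 frames, vocab {blank, a, b}, target "a".
p = np.array([[0.6, 0.3, 0.1],
              [0.5, 0.4, 0.1]])
# Valid alignments: (a,-), (-,a), (a,a)
# 0.3*0.5 + 0.6*0.4 + 0.3*0.4 = 0.51
print(round(ctc_forward_prob(p, [1]), 6))  # 0.51
```

Because the sum runs over every alignment, training does not require frame-level depression annotations; the model is free to place the sparse depressive cues wherever they actually occur in the utterance, which is what the abstract means by "alignment-aware training".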
Problem

Research questions and friction points this paper is trying to address.

Extracting meaningful depression features from sparse speech cues
Leveraging hierarchical SSL representations beyond single-layer approaches
Handling irregular temporal patterns in depressive speech detection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses multi-layer SSL features with cross-attention
Integrates hierarchical clustering and temporal classification
Models inter-layer dependencies for depression detection
Yuxin Li
College of Computing and Data Science, Nanyang Technological University, Singapore
Eng Siong Chng
College of Computing and Data Science, Nanyang Technological University, Singapore
Cuntai Guan
President's Chair Professor, CCDS, Nanyang Technological University
Brain-Computer Interfaces, Machine Learning, Artificial Intelligence