🤖 AI Summary
This work addresses the challenge of capturing subtle psychological states such as ambivalence and hesitancy in videos, states that often manifest as cross-modal inconsistencies among facial expressions, vocal prosody, and textual semantics. To this end, we propose a segmented multimodal large language model framework that divides long videos into clips of at most five seconds and leverages Qwen3-Omni-30B-A3B to fuse visual and auditory signals for fine-grained emotion recognition. Our approach is the first to integrate segment-level modeling with a multimodal large language model for ambivalence and hesitancy (AH) detection; it mitigates the computational overhead and token limitations inherent in processing long videos while improving the modeling of complex emotional conflicts. Trained on the BAH dataset using the MS-Swift framework with a combination of LoRA and full-parameter fine-tuning, our model achieves an accuracy of 85.1% on the test set, significantly outperforming existing methods.
📝 Abstract
Emotion recognition in videos is a pivotal task in affective computing, where identifying subtle psychological states such as Ambivalence and Hesitancy holds significant value for behavioral intervention and digital health. Ambivalence and Hesitancy often manifest as cross-modal inconsistencies, such as discrepancies between facial expressions, vocal tone, and textual semantics, posing a substantial challenge for automated recognition. This paper proposes a recognition framework that integrates temporal segment modeling with Multimodal Large Language Models. To address computational efficiency and token constraints in long video processing, we employ a segment-based strategy, partitioning videos into short clips with a maximum duration of 5 seconds. We leverage the Qwen3-Omni-30B-A3B model, fine-tuned on the BAH dataset using LoRA and full-parameter strategies via the MS-Swift framework, enabling the model to jointly analyze visual and auditory signals. Experimental results demonstrate that the proposed method achieves an accuracy of 85.1% on the test set, significantly outperforming existing baselines and validating the capability of Multimodal Large Language Models to capture complex and nuanced emotional conflicts. The code is released at https://github.com/dlnn123/A-H-Detection-with-Qwen-Omni.git.
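The segment-based strategy described above can be sketched as a simple boundary computation: a video of arbitrary length is partitioned into consecutive clips of at most 5 seconds. This is a minimal illustrative sketch (the function name and signature are assumptions, not from the released code):

```python
def segment_boundaries(duration_s: float, max_clip_s: float = 5.0):
    """Partition a video of `duration_s` seconds into consecutive
    (start, end) clip boundaries, each at most `max_clip_s` seconds long,
    mirroring the paper's 5-second segmentation strategy.
    Illustrative helper; not taken from the authors' repository."""
    bounds = []
    start = 0.0
    while start < duration_s:
        end = min(start + max_clip_s, duration_s)
        bounds.append((start, end))
        start = end
    return bounds

# A 12-second video yields two full 5 s clips plus a 2 s remainder.
print(segment_boundaries(12.0))  # [(0.0, 5.0), (5.0, 10.0), (10.0, 12.0)]
```

Each resulting clip would then be passed, with its audio track, to the multimodal model for segment-level AH prediction; the final trailing clip is allowed to be shorter than 5 seconds rather than padded.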