🤖 AI Summary
Existing multimodal intent recognition methods suffer from strong modality dependence and insufficient capability to model fine-grained semantic relationships, hindering complex intent understanding. To address this, we propose an LLM-guided semantic relation reasoning framework. First, a large language model (LLM) automatically generates interpretable semantic cues. Second, a hierarchical Chain-of-Thought (CoT) reasoning mechanism—progressing from coarse to fine granularity—dynamically identifies salient semantic elements. Third, leveraging logical principles, we construct three types of semantic relations—causal, temporal, and dependency—enabling unsupervised semantic discovery and adaptive ranking without manual priors. Evaluated on multimodal intent recognition and dialogue act recognition tasks, our method achieves significant improvements over state-of-the-art approaches, demonstrating superior effectiveness and generalizability in fine-grained semantic modeling and cross-modal relational reasoning.
📝 Abstract
Understanding human intents from multimodal signals is critical for analyzing human behaviors and enhancing human-machine interactions in real-world scenarios. However, existing methods rely heavily on modality-level representations, which constrains relational reasoning over fine-grained semantics in complex intent understanding. This paper proposes a novel LLM-Guided Semantic Relational Reasoning (LGSRR) method, which harnesses the expansive knowledge of large language models (LLMs) to establish semantic foundations that boost smaller models' relational reasoning performance. Specifically, an LLM-based strategy extracts fine-grained semantics as guidance for subsequent reasoning, driven by a shallow-to-deep Chain-of-Thought (CoT) that autonomously uncovers, describes, and ranks semantic cues by importance without relying on manually defined priors. In addition, we formally model three fundamental types of semantic relations (causal, temporal, and dependency) grounded in logical principles and analyze their nuanced interplay to enable more effective relational reasoning. Extensive experiments on multimodal intent and dialogue act recognition tasks demonstrate LGSRR's superiority over state-of-the-art methods, with consistent performance gains across diverse semantic understanding scenarios. The complete data and code are available at https://github.com/thuiar/LGSRR.
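The pipeline described above can be sketched in outline: an LLM is prompted shallow-to-deep to surface semantic cues, the cues are ranked by LLM-estimated importance rather than manual priors, and top-ranked cues are paired under the three relation types (causal, temporal, dependency). This is a minimal illustrative sketch, not the authors' implementation; all names (`Cue`, `extract_cues`, `build_relations`) and the mocked LLM are assumptions, and a real system would call an actual LLM and feed the relation triples to a smaller reasoning model.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical record for one fine-grained semantic cue, with an
# importance score assigned during the shallow-to-deep CoT pass.
@dataclass
class Cue:
    text: str
    depth: int    # 0 = shallow/coarse prompt, 1 = deep/fine-grained prompt
    score: float  # LLM-estimated importance (no manually defined priors)

# The three relation types modeled in the paper.
RELATION_TYPES = ("causal", "temporal", "dependency")

def extract_cues(utterance: str,
                 llm: Callable[[str], list[tuple[str, float]]]) -> list[Cue]:
    """Shallow-to-deep CoT sketch: query the LLM once per depth level,
    from a coarse scene description down to fine-grained details, then
    adaptively rank all cues by the LLM's own importance estimates."""
    cues = []
    prompts = [
        f"Describe the overall behavior in: {utterance}",       # shallow
        f"List fine-grained semantic details in: {utterance}",  # deep
    ]
    for depth, prompt in enumerate(prompts):
        for text, score in llm(prompt):
            cues.append(Cue(text, depth, score))
    return sorted(cues, key=lambda c: c.score, reverse=True)

def build_relations(cues: list[Cue]) -> list[tuple[str, str, str]]:
    """Enumerate candidate (cue, relation, cue) triples over ranked cues
    for each of the three semantic relation types."""
    triples = []
    for i, a in enumerate(cues):
        for b in cues[i + 1:]:
            for rel in RELATION_TYPES:
                triples.append((a.text, rel, b.text))
    return triples

# Mock LLM standing in for a real model call (illustration only).
def mock_llm(prompt: str) -> list[tuple[str, float]]:
    if "overall" in prompt:
        return [("a speaker complains about service", 0.6)]
    return [("frowning face", 0.9), ("raised voice", 0.8)]

cues = extract_cues("customer video clip", mock_llm)
triples = build_relations(cues)
```

In this toy run the fine-grained visual cues outrank the coarse description, and the two top cues are linked by all three candidate relation types for downstream reasoning.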