Boosting Short Text Classification with Multi-Source Information Exploration and Dual-Level Contrastive Learning

📅 2025-01-16
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
To address the classification performance bottleneck caused by semantic sparsity in short texts and scarcity of labeled data, this paper proposes MI-DELIGHT: a model that represents short texts as multi-source enhanced graphs—integrating statistical co-occurrence, linguistic structure, and knowledge facts—and employs dual-granularity contrastive learning (instance-level and cluster-level) to strengthen discriminative representation learning. Furthermore, it introduces a hierarchical task correlation architecture that explicitly models dependency relationships between primary and auxiliary tasks. Key innovations include: (i) the first multi-source information协同 graph construction mechanism, (ii) a novel dual-granularity contrastive learning paradigm, and (iii) an interpretable hierarchical task modeling framework. Extensive experiments demonstrate that MI-DELIGHT significantly outperforms state-of-the-art methods across multiple standard short-text classification benchmarks; notably, in several low-resource settings, it even surpasses mainstream large language models.

Technology Category

Application Category

📝 Abstract
Short text classification, as a research subtopic in natural language processing, is more challenging due to its semantic sparsity and insufficient labeled samples in practical scenarios. We propose a novel model named MI-DELIGHT for short text classification in this work. Specifically, it first performs multi-source information (i.e., statistical information, linguistic information, and factual information) exploration to alleviate the sparsity issues. Then, the graph learning approach is adopted to learn the representation of short texts, which are presented in graph forms. Moreover, we introduce a dual-level (i.e., instance-level and cluster-level) contrastive learning auxiliary task to effectively capture different-grained contrastive information within massive unlabeled data. Meanwhile, previous models merely perform the main task and auxiliary tasks in parallel, without considering the relationship among tasks. Therefore, we introduce a hierarchical architecture to explicitly model the correlations between tasks. We conduct extensive experiments across various benchmark datasets, demonstrating that MI-DELIGHT significantly surpasses previous competitive models. It even outperforms popular large language models on several datasets.
Problem

Research questions and friction points this paper is trying to address.

Short Text Classification
Information Scarcity
Label Data Insufficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

MI-DELIGHT
Multi-angle Information Extraction
Dual-level Contrastive Learning
🔎 Similar Papers
No similar papers found.
Yonghao Liu
Yonghao Liu
Jilin University
Graph Neural NetworkNatural Language Processing
M
Mengyu Li
Key Laboratory of Symbolic Computation and Knowledge Engineering of the Ministry of Education, College of Computer Science and Technology, Jilin University
W
Wei Pang
Mathematical and Computer Sciences, Heriot-Watt University
Fausto Giunchiglia
Fausto Giunchiglia
Professor of Computer Science, Università di Trento
Computational theories of the mind
L
Lan Huang
Key Laboratory of Symbolic Computation and Knowledge Engineering of the Ministry of Education, College of Computer Science and Technology, Jilin University
X
Xiaoyue Feng
Key Laboratory of Symbolic Computation and Knowledge Engineering of the Ministry of Education, College of Computer Science and Technology, Jilin University
R
Renchu Guan
Key Laboratory of Symbolic Computation and Knowledge Engineering of the Ministry of Education, College of Computer Science and Technology, Jilin University