🤖 AI Summary
To address the classification performance bottleneck caused by semantic sparsity in short texts and the scarcity of labeled data, this paper proposes MI-DELIGHT: a model that represents short texts as multi-source enhanced graphs—integrating statistical co-occurrence, linguistic structure, and knowledge facts—and employs dual-granularity contrastive learning (instance-level and cluster-level) to strengthen discriminative representation learning. Furthermore, it introduces a hierarchical task correlation architecture that explicitly models dependency relationships between the primary and auxiliary tasks. Key innovations include: (i) the first multi-source collaborative graph construction mechanism, (ii) a novel dual-granularity contrastive learning paradigm, and (iii) an interpretable hierarchical task modeling framework. Extensive experiments demonstrate that MI-DELIGHT significantly outperforms state-of-the-art methods across multiple standard short-text classification benchmarks; notably, in several low-resource settings, it even surpasses mainstream large language models.
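The paper's exact graph construction is not reproduced here, but the multi-source idea can be sketched minimally: build edge sets independently from each source and fuse them into one weighted word graph. In the sketch below, the statistical source is sliding-window co-occurrence (as in PMI/TextGCN-style word graphs), while the `ling` and `fact` edge sets are hypothetical placeholders standing in for dependency-parse and knowledge-base edges:

```python
from collections import Counter

def cooccurrence_edges(docs, window=3):
    """Statistical source: count word co-occurrences inside a sliding
    window over each short text. Edge keys are sorted word pairs."""
    counts = Counter()
    for doc in docs:
        toks = doc.lower().split()
        for i in range(len(toks)):
            for j in range(i + 1, min(i + window, len(toks))):
                if toks[i] != toks[j]:
                    counts[tuple(sorted((toks[i], toks[j])))] += 1
    return counts

def merge_sources(*edge_sets):
    """Fuse edge counters from several sources (co-occurrence, parses,
    knowledge triples) into one weighted graph by summing weights."""
    graph = Counter()
    for edges in edge_sets:
        graph.update(edges)
    return graph

docs = ["cheap flight deals", "flight delay news", "cheap hotel deals"]
stat = cooccurrence_edges(docs)
# hypothetical outputs of a dependency parser and a KB lookup:
ling = Counter({("cheap", "deals"): 2, ("delay", "flight"): 1})
fact = Counter({("flight", "hotel"): 1})  # e.g. both map to a Travel entity
g = merge_sources(stat, ling, fact)
print(g[("cheap", "flight")], g[("cheap", "deals")])  # → 1 4
```

Edges seen by several sources accumulate higher weight, so the fused graph naturally emphasizes relations corroborated across statistics, syntax, and knowledge.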
📝 Abstract
Short text classification, as a research subtopic in natural language processing, is particularly challenging due to semantic sparsity and the scarcity of labeled samples in practical scenarios. We propose a novel model named MI-DELIGHT for short text classification in this work. Specifically, it first performs multi-source information exploration (i.e., statistical information, linguistic information, and factual information) to alleviate the sparsity issue. A graph learning approach is then adopted to learn representations of the short texts, which are modeled in graph form. Moreover, we introduce a dual-level (i.e., instance-level and cluster-level) contrastive learning auxiliary task to effectively capture contrastive information at different granularities from massive unlabeled data. Meanwhile, previous models merely perform the main task and auxiliary tasks in parallel, without considering the relationships among tasks. Therefore, we introduce a hierarchical architecture to explicitly model the correlations between tasks. We conduct extensive experiments across various benchmark datasets, demonstrating that MI-DELIGHT significantly surpasses previous competitive models. It even outperforms popular large language models on several datasets.
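The abstract does not spell out the two contrastive objectives, but the dual-level idea can be illustrated with standard InfoNCE-style losses: at the instance level, two augmented views of each text are pulled together against other texts in the batch; at the cluster level, the same loss is applied to the columns of soft cluster assignments, so matching clusters across views attract each other. A minimal NumPy sketch, under these assumptions (the paper's actual losses may differ):

```python
import numpy as np

def instance_contrastive_loss(z1, z2, tau=0.5):
    """InfoNCE-style loss over two views z1, z2 of shape (n, d):
    matching rows are positives, all other rows are negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                       # (n, n) cosine similarities
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -float(np.mean(np.diag(log_prob)))   # maximize diagonal (positives)

def cluster_contrastive_loss(p1, p2, tau=0.5):
    """Cluster level: p1, p2 are (n, k) soft cluster assignments from two
    views; transposing treats each cluster's column as one 'instance'."""
    return instance_contrastive_loss(p1.T, p2.T, tau)

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))                    # toy text embeddings
z_aug = z + 0.01 * rng.normal(size=z.shape)     # lightly perturbed view
loss_inst = instance_contrastive_loss(z, z_aug)
p = np.abs(rng.normal(size=(8, 3)))             # toy soft assignments
loss_clus = cluster_contrastive_loss(p, p)
total = loss_inst + loss_clus                   # auxiliary objective, summed
```

Because the two views here are nearly identical, the instance loss is small; replacing `z_aug` with unrelated random vectors drives it toward `log(n)`, which is the expected behavior of an InfoNCE objective.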