A Large Language Model Guided Topic Refinement Mechanism for Short Text Modeling

📅 2024-03-26
📈 Citations: 2
Influential: 0
🤖 AI Summary
Short text topic modeling suffers from low topic coherence and coarse semantic granularity due to word frequency sparsity. To address this, we propose a model-agnostic topic refinement mechanism that pioneers the integration of large language models (LLMs) into post-hoc topic processing. Leveraging structured prompt engineering, our method automatically detects semantically anomalous terms within topic word lists and generates more semantically consistent replacements—mimicking human editorial review. It tightly couples LLMs’ deep semantic understanding with outputs from conventional topic models (e.g., LDA, BERTopic), enabling fine-grained, word-level identification and optimization of semantically intrusive terms. Evaluated on four short-text datasets, our approach improves normalized pointwise mutual information (NPMI) by 12.7% on average and boosts downstream text classification accuracy by 3.9%, significantly enhancing both topic interpretability and practical utility.

📝 Abstract
Modeling topics effectively in short texts, such as tweets and news snippets, is crucial to capturing rapidly evolving social trends. Existing topic models often struggle to accurately capture the underlying semantic patterns of short texts, primarily due to the sparse nature of such data. This sparsity leads to an unavoidable lack of word co-occurrence information, which hinders the coherence and granularity of mined topics. This paper introduces a novel model-agnostic mechanism, termed Topic Refinement, which leverages the advanced text comprehension capabilities of Large Language Models (LLMs) for short-text topic modeling. Unlike traditional methods, this post-processing mechanism enhances the quality of topics extracted by various topic modeling methods through prompt engineering. We guide LLMs in identifying semantic intruder words within the extracted topics and suggesting coherent alternatives to replace them. This process mimics human-like identification, evaluation, and refinement of the extracted topics. Extensive experiments on four diverse datasets demonstrate that Topic Refinement boosts topic quality and improves performance in topic-related text classification tasks.
Problem

Research questions and friction points this paper is trying to address.

Data sparsity in short texts weakens topic modeling
Mined topics lack semantic coherence and granularity
How to leverage LLMs to refine extracted topics
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-guided topic refinement mechanism
Enhances topic quality via prompt engineering
Identifies and replaces semantic intruder words
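The refinement step described above can be sketched as a prompt sent to an LLM: given a topic's word list, the model is asked to flag intruder words and propose coherent replacements. The following minimal Python sketch illustrates the idea; the prompt wording, function name, and JSON output format are illustrative assumptions, not the paper's exact prompt.

```python
# Hypothetical sketch of the Topic Refinement prompting step.
# The prompt text and output schema are assumptions for illustration;
# the paper's actual prompt engineering may differ.

def build_refinement_prompt(topic_words):
    """Build a prompt asking an LLM to identify intruder words in a
    topic word list and suggest semantically coherent replacements."""
    words = ", ".join(topic_words)
    return (
        "You are reviewing a topic extracted by a topic model.\n"
        f"Topic words: {words}\n"
        "1. Identify any word that is semantically inconsistent with the "
        "others (an 'intruder' word).\n"
        "2. Suggest a more coherent replacement for each intruder word.\n"
        'Answer as JSON: {"intruders": [...], "replacements": {...}}'
    )

# Example: "banana" is the obvious intruder in a politics topic.
prompt = build_refinement_prompt(
    ["election", "vote", "senate", "banana", "ballot"]
)
print(prompt)
```

In the actual mechanism this prompt would be sent to an LLM, and the returned replacements would be substituted into the topic word list produced by the base model (e.g., LDA or BERTopic).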
Authors
Shuyu Chang, Nanjing University of Posts and Telecommunications (AI and Security, Text Mining)
Rui Wang, School of Computer Science, Nanjing University of Posts and Telecommunications
Peng Ren, School of Computer Science, Nanjing University of Posts and Telecommunications
Haiping Huang, School of Computer Science, Nanjing University of Posts and Telecommunications