Enhancing LLM Character-Level Manipulation via Divide and Conquer

📅 2025-02-12
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models (LLMs) exhibit poor performance on character-level string operations—such as deletion, insertion, and substitution—due to tokenization-induced limitations in fine-grained character reasoning. To address this, we propose a zero-shot, “divide-and-conquer” prompting framework that decomposes each operation into explicit, atomic character-level subtasks and guides controllable token reconstruction via structured intermediate representations. This approach is the first to explicitly reveal and enhance LLMs’ awareness of intra-token character structure. By bridging the gap between token-level processing and character-level manipulation, our method achieves substantial accuracy improvements across all three core tasks—Deletion, Insertion, and Substitution. Furthermore, we publicly release a standardized benchmark and open-source implementation, establishing a new, reproducible paradigm for studying fine-grained textual manipulation in LLMs.
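The structured intermediate representation described above can be sketched as a prompt builder. The template below is a hypothetical illustration: the function name, stage labels, and prompt wording are assumptions for exposition, not the paper's released prompts.

```python
# Hypothetical sketch of a divide-and-conquer prompt for a character
# deletion task. Spelling the word out one character at a time
# ("h e l l o") makes intra-token character structure explicit to the
# model; the wording and stage names are illustrative assumptions.

def build_deletion_prompt(word: str, target: str) -> str:
    atomized = " ".join(word)  # divide: expose one character per position
    return (
        f"Task: delete every '{target}' from the word '{word}'.\n"
        f"Step 1 (divide): the word spelled out is: {atomized}\n"
        f"Step 2 (conquer): remove each '{target}' from that character list.\n"
        f"Step 3 (reconstruct): join the remaining characters and "
        f"output only the resulting word."
    )

print(build_deletion_prompt("hello", "l"))
```

Analogous templates would cover the Insertion and Substitution tasks by changing only the Step 2 instruction.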

📝 Abstract
Large Language Models (LLMs) have demonstrated strong generalization capabilities across a wide range of natural language processing (NLP) tasks. However, they exhibit notable weaknesses in character-level string manipulation, struggling with fundamental operations such as character deletion, insertion, and substitution. These challenges stem primarily from tokenization constraints, despite the critical role of such operations in data preprocessing and code generation. Through systematic analysis, we derive two key insights: (1) LLMs face significant difficulties in leveraging intrinsic token knowledge for character-level reasoning, and (2) atomized word structures can substantially enhance LLMs' ability to process token-level structural information. Building on these insights, we propose Character-Level Manipulation via Divide and Conquer, a novel approach designed to bridge the gap between token-level processing and character-level manipulation. Our method decomposes complex operations into explicit character-level subtasks coupled with controlled token reconstruction phases, leading to significant improvements in accuracy. Without additional training, our method significantly improves accuracies on the $\texttt{Deletion}$, $\texttt{Insertion}$, and $\texttt{Substitution}$ tasks. To support further research, we open-source our implementation and benchmarks.
Problem

Research questions and friction points this paper is trying to address.

Improving character-level manipulation in LLMs
Addressing tokenization limitations in NLP
Enhancing accuracy in string operations without training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Divide and Conquer strategy
Character-level subtasks decomposition
Controlled token reconstruction
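As a concrete illustration, the three contributions above can be simulated deterministically in plain Python. This is a sketch under stated assumptions: the function names are invented here, and in the paper the "conquer" step is carried out by the LLM following the prompt, not by string code.

```python
# Hypothetical simulation of the divide-and-conquer pipeline. In the
# actual method an LLM performs the character-level subtasks; here each
# stage is mimicked with ordinary string operations for clarity.

def atomize(word: str) -> list[str]:
    """Divide: split a token into an explicit character list."""
    return list(word)

def delete(chars: list[str], target: str) -> list[str]:
    """Conquer (Deletion): drop every occurrence of the target character."""
    return [c for c in chars if c != target]

def insert(chars: list[str], target: str, new: str) -> list[str]:
    """Conquer (Insertion): place a new character after each target."""
    out: list[str] = []
    for c in chars:
        out.append(c)
        if c == target:
            out.append(new)
    return out

def substitute(chars: list[str], target: str, new: str) -> list[str]:
    """Conquer (Substitution): replace each target character."""
    return [new if c == target else c for c in chars]

def reconstruct(chars: list[str]) -> str:
    """Reconstruct: merge the edited characters back into a token."""
    return "".join(chars)

print(reconstruct(delete(atomize("hello"), "l")))       # → heo
print(reconstruct(insert(atomize("cat"), "a", "x")))    # → caxt
print(reconstruct(substitute(atomize("book"), "o", "0")))  # → b00k
```

The atomize/reconstruct pair is the key move: it converts an opaque multi-character token into a representation where each character is individually addressable.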