🤖 AI Summary
Large language models (LLMs) exhibit poor performance on character-level string operations—such as deletion, insertion, and substitution—due to tokenization-induced limitations in fine-grained character reasoning. To address this, we propose a zero-shot, “divide-and-conquer” prompting framework that decomposes each operation into explicit, atomic character-level subtasks and guides controllable token reconstruction via structured intermediate representations. This approach is the first to explicitly reveal and enhance LLMs’ awareness of intra-token character structure. By bridging the gap between token-level processing and character-level manipulation, our method achieves substantial accuracy improvements across all three core tasks—Deletion, Insertion, and Substitution. Furthermore, we publicly release a standardized benchmark and open-source implementation, establishing a new, reproducible paradigm for studying fine-grained textual manipulation in LLMs.
📝 Abstract
Large Language Models (LLMs) have demonstrated strong generalization capabilities across a wide range of natural language processing (NLP) tasks. However, they exhibit notable weaknesses in character-level string manipulation, struggling with fundamental operations such as character deletion, insertion, and substitution. These challenges stem primarily from tokenization constraints, despite the critical role of such operations in data preprocessing and code generation. Through systematic analysis, we derive two key insights: (1) LLMs face significant difficulties in leveraging intrinsic token knowledge for character-level reasoning, and (2) atomized word structures can substantially enhance LLMs' ability to process token-level structural information. Building on these insights, we propose Character-Level Manipulation via Divide and Conquer, a novel approach designed to bridge the gap between token-level processing and character-level manipulation. Our method decomposes complex operations into explicit character-level subtasks coupled with controlled token reconstruction phases. Without additional training, it significantly improves accuracy on the $\texttt{Deletion}$, $\texttt{Insertion}$, and $\texttt{Substitution}$ tasks. To support further research, we open-source our implementation and benchmarks.
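The divide-and-conquer decomposition described above can be illustrated with a minimal sketch. This is not the authors' actual prompting pipeline; it is a hypothetical, plain-Python analogue of the three stages the paper describes: atomizing a word into explicit characters (divide), applying the edit as a character-level subtask, and reconstructing the token (conquer).

```python
# Illustrative sketch of the divide-and-conquer idea (hypothetical, not the
# authors' exact prompts): atomize a word into characters, apply the edit at
# character level, then reconstruct the token.

def atomize(word: str) -> list[str]:
    """Divide step: split a word into an explicit character list."""
    return list(word)

def delete_char(chars: list[str], target: str) -> list[str]:
    """Character-level Deletion subtask: drop every occurrence of `target`."""
    return [c for c in chars if c != target]

def insert_char(chars: list[str], index: int, new_char: str) -> list[str]:
    """Character-level Insertion subtask at a given position."""
    return chars[:index] + [new_char] + chars[index:]

def substitute_char(chars: list[str], old: str, new: str) -> list[str]:
    """Character-level Substitution subtask: replace `old` with `new`."""
    return [new if c == old else c for c in chars]

def reconstruct(chars: list[str]) -> str:
    """Conquer step: rejoin the characters into a single token."""
    return "".join(chars)

# Example: delete every 'l' from "hello"
print(reconstruct(delete_char(atomize("hello"), "l")))  # -> heo
```

In the actual method, each of these stages would be carried out by the LLM itself through structured intermediate representations in the prompt, rather than by deterministic code; the sketch only makes the decomposition explicit.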