🤖 AI Summary
This work addresses the limitations of traditional chain-of-thought (CoT) compression methods, which overly rely on superficial linguistic features while neglecting underlying reasoning structures, thereby constraining both efficiency and abstraction capability. The authors propose a novel approach that compresses CoT into an image-form representation (ImgCoT), leveraging visual-spatial inductive biases to guide latent tokens in capturing global reasoning structures. A loose hybrid mechanism is introduced to preserve critical textual reasoning steps. Built upon an autoencoder architecture, the method integrates image rendering, visual token modeling, and a low-likelihood token selection strategy. Extensive experiments across multiple datasets and large language models demonstrate that this approach significantly enhances reasoning efficiency while maintaining or even improving reasoning performance.
📝 Abstract
Compressing long chains of thought (CoT) into compact latent tokens is crucial for efficient reasoning with large language models (LLMs). Recent studies employ autoencoders to achieve this by reconstructing the textual CoT from latent tokens, thereby encoding CoT semantics. However, treating textual CoT as the reconstruction target forces latent tokens to preserve surface-level linguistic features (e.g., word choice and syntax), introducing a strong linguistic inductive bias that prioritizes linguistic form over reasoning structure and limits logical abstraction. We therefore propose ImgCoT, which replaces the textual CoT reconstruction target with a visual CoT obtained by rendering the CoT into images. This substitutes the linguistic bias with a spatial inductive bias, i.e., a tendency to model the spatial layout of reasoning steps in the visual CoT, enabling latent tokens to better capture global reasoning structure. However, although visual latent tokens encode abstract reasoning structure, they may blur reasoning details. We thus propose loose ImgCoT, a hybrid reasoning scheme that augments the visual latent tokens with a few key textual reasoning steps, selected by low token log-likelihood. This design allows LLMs to retain both the global reasoning structure and fine-grained reasoning details with fewer tokens than the complete CoT. Extensive experiments across multiple datasets and LLMs demonstrate the effectiveness of both versions of ImgCoT.
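The low-likelihood selection in loose ImgCoT can be illustrated with a minimal sketch. This is an illustrative reading of the abstract, not the paper's implementation: we assume each reasoning step has per-token log-probabilities from the LLM, score each step by its mean log-likelihood, and keep the least predictable (lowest-scoring) steps as explicit text alongside the visual latent tokens. The function name `select_key_steps`, the `keep_ratio` parameter, and the toy log-prob values are all hypothetical.

```python
import math

def select_key_steps(steps, step_logprobs, keep_ratio=0.25):
    """Keep the reasoning steps with the lowest mean token log-likelihood.

    Low-likelihood steps are the ones the model finds hardest to predict,
    so (under this reading of the paper) they carry details that the
    visual latent tokens may blur and are retained as text.
    """
    # Mean log-likelihood per step (lists are parallel: one score list per step).
    scores = [sum(lp) / len(lp) for lp in step_logprobs]
    k = max(1, math.ceil(keep_ratio * len(steps)))
    # Indices of the k lowest-scoring steps, restored to original order.
    keep = sorted(sorted(range(len(steps)), key=lambda i: scores[i])[:k])
    return [steps[i] for i in keep]

# Toy example with made-up log-probs; real values would come from the LLM.
steps = ["Let x = 5.", "Then 2x = 10.", "So the answer is 10."]
logprobs = [[-0.2, -0.3], [-1.5, -2.0, -1.8], [-0.4, -0.5]]
print(select_key_steps(steps, logprobs, keep_ratio=0.34))
# → ['Then 2x = 10.', 'So the answer is 10.']
```

The middle and final steps have the lowest mean log-likelihoods (about -1.77 and -0.45 versus -0.25), so they are the ones preserved as textual reasoning steps.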