🤖 AI Summary
This work proposes an efficient compressed-reasoning paradigm that addresses the substantial increase in output length and computational overhead associated with Chain-of-Thought (CoT) prompting. The method compresses the intermediate steps of complex visual reasoning into a single latent token, using supervision signals derived from text-to-image rendering and alignment with DeepSeek-OCR hidden states. Evaluated on ProntoQA and ProsQA, the approach achieves accuracies of 99.80% and 97.80%, respectively, while reducing average output length by 11× (up to 87.4×) with only a 2.21% accuracy drop. It further yields a 6.8× improvement in output token contribution (OTC), enabling efficient, auditable, low-redundancy inference.
📝 Abstract
Chain-of-thought (CoT) prompting improves reasoning but often increases inference cost by one to two orders of magnitude. To address this cost, we present **OneLatent**, a framework that compresses intermediate reasoning into a single latent token via supervision from rendered CoT images and DeepSeek-OCR hidden states. By rendering textual steps into images, we obtain a deterministic supervision signal that can be inspected and audited without requiring the model to output verbose textual rationales. Across benchmarks, OneLatent reduces average output length by $11\times$ with only a $2.21\%$ average accuracy drop relative to textual CoT, while improving output token contribution (OTC) by $6.8\times$. On long-chain logical reasoning, OneLatent reaches $99.80\%$ on ProntoQA and $97.80\%$ on ProsQA with one latent token, with compression up to $87.4\times$, supporting compression-constrained generalization.
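The paper does not publish its training objective here, but the alignment idea can be illustrated with a minimal sketch: the model emits one latent vector, and a loss pulls it toward a summary of the hidden states that DeepSeek-OCR produces for the rendered CoT image. Everything below (mean pooling as the summary, cosine distance as the loss, the array shapes) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def align_loss(latent: np.ndarray, ocr_hidden: np.ndarray) -> float:
    """Cosine-distance loss between the single latent token and a
    mean-pooled summary of OCR hidden states (assumed pooling scheme)."""
    target = ocr_hidden.mean(axis=0)
    cos = float(latent @ target) / (
        np.linalg.norm(latent) * np.linalg.norm(target)
    )
    return 1.0 - cos

rng = np.random.default_rng(0)
# Hypothetical hidden states: 196 image patches, 64-dim features,
# standing in for DeepSeek-OCR's encoding of a rendered CoT image.
ocr_states = rng.normal(size=(196, 64))

# A latent equal to the pooled target incurs zero loss; a random
# latent does not -- the gradient of this loss would drive alignment.
perfect = ocr_states.mean(axis=0)
random_latent = rng.normal(size=64)
print(align_loss(perfect, ocr_states))   # ~0.0
print(align_loss(random_latent, ocr_states) > 0.0)
```

At inference time only the single latent token would be generated in place of the textual rationale, which is where the reported length compression comes from.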