Reinforced Context Order Recovery for Adaptive Reasoning and Planning

📅 2025-08-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing causal language models and discrete diffusion models rely on fixed (e.g., left-to-right) or random token generation orders, limiting their ability to align with the inherent logical structure required for complex reasoning and planning tasks. Method: We propose the first reinforcement learning–based framework for adaptive generation-order modeling. It introduces V-information to quantify order bias, employs an unsupervised RL policy to dynamically predict optimal token generation sequences without ground-truth order annotations, and integrates self-supervised token difficulty estimation with discrete diffusion to enable data-driven generation path optimization. Contribution/Results: Our method achieves significant improvements over standard baselines across multiple reasoning and planning benchmarks—even outperforming oracle models trained with ground-truth order supervision—while delivering higher generation efficiency and accuracy.
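For reference, the $\mathcal{V}$-information mentioned above is, in the usual framing of usable information under computational constraints (Xu et al.), the drop in predictive uncertainty achievable by a restricted model family $\mathcal{V}$; how the paper specializes it to order bias is an assumption here, but the base quantity is typically written as:

```latex
I_{\mathcal{V}}(X \to Y)
  = H_{\mathcal{V}}(Y \mid \varnothing) - H_{\mathcal{V}}(Y \mid X),
\qquad
H_{\mathcal{V}}(Y \mid X)
  = \inf_{f \in \mathcal{V}} \mathbb{E}\left[-\log f[X](Y)\right]
```

Intuitively, a token order is favorable when conditioning on the already-generated tokens $X$ makes the remaining tokens $Y$ cheap to predict for models in $\mathcal{V}$.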

📝 Abstract
Modern causal language models, followed by rapid developments in discrete diffusion models, can now produce a wide variety of interesting and useful content. However, these families of models are predominantly trained to output tokens with a fixed (left-to-right) or random order, which may deviate from the logical order in which tokens are generated originally. In this paper, we observe that current causal and diffusion models encounter difficulties in problems that require adaptive token generation orders to solve tractably, which we characterize with the $\mathcal{V}$-information framework. Motivated by this, we propose Reinforced Context Order Recovery (ReCOR), a reinforcement-learning-based framework to extract adaptive, data-dependent token generation orders from text data without annotations. Self-supervised by token prediction statistics, ReCOR estimates the hardness of predicting every unfilled token and adaptively selects the next token during both training and inference. Experiments on challenging reasoning and planning datasets demonstrate the superior performance of ReCOR compared with baselines, sometimes outperforming oracle models supervised with the ground-truth order.
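The core decoding loop described in the abstract — estimate the hardness of every unfilled token, then fill the easiest one next — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `confidence_fn` is a hypothetical stand-in for the model's per-position confidence, and the "sampled" tokens are placeholders.

```python
import math

def adaptive_order_decode(confidence_fn, sequence_length):
    """Greedy easiest-first decoding sketch: repeatedly fill the unfilled
    position whose token the model is most confident about (lowest
    estimated hardness), mimicking adaptive order selection.

    confidence_fn(filled, pos) -> probability in (0, 1] is a hypothetical
    stand-in for the model's confidence at `pos` given tokens filled so far.
    """
    filled = {}   # position -> token
    order = []    # generation order actually taken
    while len(filled) < sequence_length:
        unfilled = [p for p in range(sequence_length) if p not in filled]
        # Hardness estimate: negative log-confidence; pick the easiest slot.
        next_pos = min(unfilled,
                       key=lambda p: -math.log(confidence_fn(filled, p)))
        filled[next_pos] = f"tok{next_pos}"  # placeholder "sampled" token
        order.append(next_pos)
    return order

# Toy confidence model: positions adjacent to already-filled ones are easier,
# and position 2 is an "anchor" the model is sure about from the start.
def toy_confidence(filled, pos):
    if pos == 2 and not filled:
        return 0.9
    near = any(abs(pos - q) == 1 for q in filled)
    return 0.8 if near else 0.3

print(adaptive_order_decode(toy_confidence, 5))  # → [2, 1, 0, 3, 4]
```

With this toy confidence model the decoder starts at the easy anchor (position 2) and grows outward, illustrating a data-dependent order rather than a fixed left-to-right one; ReCOR additionally learns the selection policy with RL rather than a hand-written heuristic.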
Problem

Research questions and friction points this paper is trying to address.

Adaptive token generation orders for reasoning
Extracting data-dependent orders without annotations
Improving performance on challenging planning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning framework for adaptive token generation
Self-supervised token prediction statistics for hardness estimation
Data-dependent token order selection without annotations