Recall-Extend Dynamics: Enhancing Small Language Models through Controlled Exploration and Refined Offline Integration

📅 2025-08-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
Small language models (SLMs) suffer from limited reasoning capabilities due to narrow search spaces, redundant knowledge distillation, and misalignment between offline data distributions and target policy distributions. To address these issues, this paper proposes the Recall-Extend Dynamics framework, which jointly integrates offline supervised fine-tuning (SFT) and online reinforcement learning (RL). It introduces an entropy-change monitoring mechanism to dynamically adjust the weight of offline supervision and a sample-accuracy-based policy switching module to enable adaptive coordination between imitation learning and autonomous optimization. Additionally, the framework incorporates entropy-regularized exploration, policy transfer, and optimized insertion of distilled data. Experiments demonstrate substantial improvements in SLM performance on complex reasoning tasks, surpassing conventional knowledge distillation and pure RL baselines in both knowledge utilization efficiency and policy exploration quality.
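The summary describes an entropy-change monitoring mechanism that adjusts the weight of offline supervision, but does not give the exact formula. A minimal sketch of one plausible form, where offline-SFT weight shrinks when offline data drives entropy down much faster than online rollouts (the function names, the ratio form, and the `1/(1+ratio)` schedule are all assumptions for illustration, not the paper's method):

```python
import math

def token_entropy(probs):
    """Shannon entropy (in nats) of a token probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def offline_sft_weight(d_ent_offline, d_ent_online, base_weight=1.0, eps=1e-8):
    """Scale the offline-SFT loss weight by the ratio of entropy changes.

    If training on offline (distilled) data collapses entropy much faster
    than online RL rollouts do -- i.e. imitation is crushing exploration --
    the offline weight is damped; if the two entropy changes are comparable,
    the weight stays near base_weight.
    """
    ratio = abs(d_ent_offline) / (abs(d_ent_online) + eps)
    return base_weight / (1.0 + ratio)
```

With matched entropy changes the weight halves (`offline_sft_weight(1.0, 1.0)` is 0.5), while a negligible offline entropy change leaves it near `base_weight`; any monotone damping schedule would serve the same role here.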

📝 Abstract
Many existing studies have achieved significant improvements in the reasoning capabilities of large language models (LLMs) through reinforcement learning with verifiable rewards (RLVR), while the enhancement of reasoning abilities in small language models (SLMs) has not yet been sufficiently explored. Combining distilled data from larger models with RLVR on the small models themselves is a natural approach, but it still faces various challenges. We therefore propose Recall-Extend Dynamics (RED): Enhancing Small Language Models through Controlled Exploration and Refined Offline Integration. In this paper, we explore the perspective of varying exploration spaces, balancing offline distillation with online reinforcement learning, and specifically design an optimized scheme for inserting offline data into training. By monitoring the ratio of entropy changes in the model on offline versus online data, we regulate the weight of offline SFT, thereby addressing both the insufficient exploration space of small models and the redundancy and complexity introduced during distillation. Furthermore, to tackle the distribution discrepancies between offline data and the current policy, we design a sample-accuracy-based policy-shift mechanism that dynamically chooses between imitating offline distilled data and learning from the model's own policy.
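The sample-accuracy-based policy-shift mechanism chooses per sample between imitation and on-policy learning; the abstract does not specify the decision rule, so the sketch below assumes a simple threshold on rollout accuracy (the function name, threshold value, and hard-switch form are illustrative assumptions):

```python
def choose_update(rollout_accuracy, threshold=0.5):
    """Pick the training signal for a prompt from the policy's own
    rollout accuracy: imitate offline distilled traces while the policy
    mostly fails on the prompt, and switch to on-policy RL once it
    succeeds often enough that its own rollouts carry useful reward signal.
    """
    if not 0.0 <= rollout_accuracy <= 1.0:
        raise ValueError("rollout_accuracy must be in [0, 1]")
    return "offline_sft" if rollout_accuracy < threshold else "online_rl"
```

A soft interpolation between the two losses (weighted by accuracy) would be an equally plausible reading; the hard switch is just the simplest version of "dynamically choosing" between the two signals.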
Problem

Research questions and friction points this paper is trying to address.

Enhancing reasoning in small language models via controlled exploration
Balancing offline distillation with online reinforcement learning
Addressing distribution discrepancies between offline data and policy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Controlled exploration balancing offline-online learning
Entropy-based regulation for offline SFT weighting
Sample-accuracy policy shift mechanism integration