LANPO: Bootstrapping Language and Numerical Feedback for Reinforcement Learning in LLMs

📅 2025-10-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM reinforcement learning methods rely solely on scalar rewards, discarding linguistically rich reasoning principles—such as step-by-step inference and error attribution—embedded in rollouts, resulting in low sample efficiency; directly incorporating online textual feedback risks information leakage (within-task feedback) or behavioral collapse (cross-task feedback). Method: We propose a decoupled optimization framework that separates linguistic and numerical feedback: linguistic feedback guides exploration and reflection, while scalar rewards drive policy optimization. We design a reward-agnostic reflection mechanism and a context-aware abstraction module, integrated with a dynamic experience buffer and online self-correction, all unified within the GRPO framework. Contribution/Results: On mathematical reasoning benchmarks, our approach significantly outperforms strong baselines—including GRPO—using 7B and 14B models, achieving substantial gains in accuracy. It demonstrates improved sample efficiency and enhanced generalization in complex reasoning tasks.
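The decoupling described above can be sketched in code. The snippet below is a hypothetical illustration, not the paper's implementation: `policy`, `experience_pool`, and their methods are assumed interfaces. The key point it shows is that language feedback enters only the prompt (exploration), while scalar rewards alone feed the GRPO-style group-normalized advantage (optimization).

```python
import statistics

def grpo_advantages(rewards):
    """Group-relative advantages as in GRPO: normalize each rollout's
    scalar reward by the group mean and standard deviation."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero std
    return [(r - mean) / std for r in rewards]

def lanpo_step(policy, problem, experience_pool, group_size=8):
    """One decoupled training step (hypothetical sketch).

    Language channel: abstracted lessons from *other* problems condition
    generation, never the reward. Numerical channel: scalar rewards drive
    the policy update via group-relative advantages.
    """
    lessons = experience_pool.retrieve(problem)  # cross-task, abstracted
    prompt = f"{lessons}\n\nProblem: {problem}"

    rollouts = [policy.generate(prompt) for _ in range(group_size)]
    rewards = [policy.score(problem, r) for r in rollouts]  # scalars only

    advantages = grpo_advantages(rewards)
    policy.update(rollouts, advantages)

    # Reflection is reward-agnostic: lessons are stored without
    # correctness labels, so self-correction cannot leak the answer.
    experience_pool.add(problem, rollouts)
    return advantages
```

In this sketch the textual lessons never touch `grpo_advantages`, which is the separation the summary describes: removing the language channel degrades exploration but leaves the optimization objective unchanged.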

📝 Abstract
Reinforcement learning in large language models (LLMs) often relies on scalar rewards, a practice that discards valuable textual rationale buried in the rollouts, forcing the model to explore de novo with each attempt and hindering sample efficiency. While LLMs can uniquely learn from language feedback provided in-context, naively integrating online experiences into RL training presents a paradox: feedback from the same problem risks information leakage and memorization, while feedback from different problems often leads to behavior collapse due to irrelevant context. To resolve this tension, we propose Language-And-Numerical Policy Optimization (LANPO), a framework that cleanly separates the roles of feedback: language guides exploration, while numerical rewards drive optimization. LANPO builds a dynamic experience pool from past trials and introduces two principles to ensure feedback is effective: Reward-Agnostic Reflection for safe intra-sample self-correction and Relevant Abstraction to distill generalizable lessons from inter-sample experiences. Across mathematical reasoning benchmarks, LANPO enables 7B and 14B models to significantly outperform strong baselines trained with GRPO in test accuracy. Our work provides a robust method for integrating historical experiences into the LLM RL loop, creating more effective and data-efficient learning agents.
Problem

Research questions and friction points this paper is trying to address.

Scalar rewards discard valuable textual rationale in RL
Language feedback integration risks information leakage or collapse
Whether separating language guidance from numerical rewards can improve learning efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

Separates language feedback from numerical rewards
Builds dynamic experience pool from past trials
Uses reflection and abstraction for feedback distillation
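The experience-pool idea in the bullets above can be made concrete with a small sketch. This is an assumed design, not the authors' code: a bounded buffer of (problem, lesson) pairs where lessons are abstracted take-aways rather than verbatim rollouts, and retrieval excludes same-problem entries so within-task feedback cannot leak the answer.

```python
from collections import deque

class ExperiencePool:
    """Hypothetical sketch of a dynamic experience pool: a bounded FIFO
    buffer of (problem, lesson) pairs, evicting the oldest entries."""

    def __init__(self, capacity=256):
        self.buffer = deque(maxlen=capacity)

    def add(self, problem, lesson):
        # Relevant Abstraction: store a short, generalizable lesson
        # rather than the full rollout, limiting irrelevant context.
        self.buffer.append((problem, lesson))

    def retrieve(self, problem, k=3):
        # Exclude same-problem entries: cross-task lessons guide
        # exploration without leaking this problem's solution.
        others = [l for p, l in self.buffer if p != problem]
        return others[-k:]  # most recent k cross-task lessons
```

The FIFO eviction keeps the pool "dynamic" in the sense the summary uses: stale lessons age out as the policy improves.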