🤖 AI Summary
Large language models (LLMs) struggle to improve on complex, verifiable tasks when synthetic data is unavailable and only binary feedback (correct/incorrect) is provided.
Method: This paper proposes a two-stage self-improvement framework: upon generating an incorrect answer, the model produces a self-reflection and retries the task; crucially, binary reward is applied exclusively to the reflection tokens, and only when the subsequent retry yields the correct answer—requiring no human annotation or synthetic data.
Contribution/Results: This is the first reflection-driven reinforcement learning paradigm that relies solely on binary feedback. The method is architecture-agnostic and alleviates both data and feedback bottlenecks. Experiments demonstrate that small fine-tuned models (1.5B–7B parameters) outperform models in the same family that are 10 times larger: math equation writing accuracy improves by up to 34.7%, and function-calling accuracy by up to 18.1%. Gains are robust and transfer across tasks.
📝 Abstract
We explore a method for improving the performance of large language models through self-reflection and reinforcement learning. By incentivizing the model to generate better self-reflections when it answers incorrectly, we demonstrate that a model's ability to solve complex, verifiable tasks can be enhanced even when generating synthetic data is infeasible and only binary feedback is available. Our framework operates in two stages: first, upon failing a given task, the model generates a self-reflective commentary analyzing its previous attempt; second, the model is given another attempt at the task with the self-reflection in context. If the subsequent attempt succeeds, the tokens generated during the self-reflection phase are rewarded. Our experimental results show substantial performance gains across a variety of model architectures, as high as 34.7% improvement at math equation writing and 18.1% improvement at function calling. Notably, smaller fine-tuned models (1.5 billion to 7 billion parameters) outperform models in the same family that are 10 times larger. Our novel paradigm is thus an exciting pathway to more useful and reliable language models that can self-improve on challenging tasks with limited external feedback.
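The two-stage loop described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `generate`, `reflect`, and `is_correct` are hypothetical stand-ins for the policy model and the binary verifier, and the returned reward is the scalar that would be credited to the reflection tokens during the RL update.

```python
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class Episode:
    reflection: Optional[str]  # None if the first attempt already succeeded
    reward: float              # binary reward credited only to reflection tokens


def reflect_and_retry(
    task: str,
    generate: Callable[..., str],     # hypothetical: model attempt, optionally conditioned on a reflection
    reflect: Callable[[str, str], str],  # hypothetical: self-reflective commentary on a failed attempt
    is_correct: Callable[[str, str], bool],  # binary verifier (correct/incorrect)
) -> Episode:
    """Sketch of the two-stage framework: on failure, generate a
    self-reflection, retry with it in context, and reward the
    reflection tokens iff the retry succeeds."""
    first = generate(task, reflection=None)
    if is_correct(task, first):
        # No failure, so no reflection stage and nothing to reward.
        return Episode(reflection=None, reward=0.0)

    reflection = reflect(task, first)
    second = generate(task, reflection=reflection)
    # Binary feedback: reflection tokens earn reward 1 only if the retry is correct.
    return Episode(reflection=reflection, reward=1.0 if is_correct(task, second) else 0.0)
```

In a full training setup, episodes with `reward == 1.0` would contribute a positive signal restricted to the reflection span, which is what incentivizes the model to produce reflections that actually repair its mistakes.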