AI Summary
Large language models (LLMs) excel at complex reasoning but struggle to reliably verify the correctness of their own outputs; existing approaches rely on external verifiers or multi-stage training pipelines, which limit scalability. This paper proposes Policy as Generative Verifier (PAG), a single-model, multi-turn reinforcement learning framework in which the model dynamically switches between policy and verifier roles. Its core innovation is a selective revision mechanism: a correction is triggered only when the model's own generative verification detects an error, which helps prevent policy collapse. The framework jointly optimizes reasoning and verification end to end without requiring auxiliary verification modules. Experiments demonstrate significant improvements in both direct generation and self-correction accuracy across diverse reasoning benchmarks. Moreover, PAG's self-verification outperforms self-consistency, achieving a better trade-off between accuracy and computational efficiency.
Abstract
Large Language Models (LLMs) have demonstrated impressive capabilities in complex reasoning tasks, yet they still struggle to reliably verify the correctness of their own outputs. Existing solutions to this verification challenge often depend on separate verifier models or require multi-stage self-correction training pipelines, which limit scalability. In this paper, we propose Policy as Generative Verifier (PAG), a simple and effective framework that empowers LLMs to self-correct by alternating between policy and verifier roles within a unified multi-turn reinforcement learning (RL) paradigm. Distinct from prior approaches that always generate a second attempt regardless of model confidence, PAG introduces a selective revision mechanism: the model revises its answer only when its own generative verification step detects an error. This verify-then-revise workflow not only alleviates model collapse but also jointly enhances both reasoning and verification abilities. Extensive experiments across diverse reasoning benchmarks highlight PAG's dual advancements: as a policy, it enhances direct generation and self-correction accuracy; as a verifier, its self-verification outperforms self-consistency.
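The verify-then-revise workflow described above can be sketched as a simple inference loop. This is a minimal illustration, not the paper's implementation: `generate` and `verify` are hypothetical stubs standing in for the same model prompted in its policy and verifier roles, and the two-turn budget is an assumed default.

```python
def generate(question, feedback=None):
    # Policy role: produce an answer.
    # Stub: deliberately wrong on the first attempt, corrected after feedback.
    return "4" if feedback else "5"

def verify(question, answer):
    # Verifier role: generative self-verification of the candidate answer.
    # Stub: hard-coded check for the toy question "What is 2 + 2?".
    return answer == "4"

def pag_inference(question, max_turns=2):
    """Selective revision: revise only when self-verification flags an error."""
    answer = generate(question)
    for _ in range(max_turns - 1):
        if verify(question, answer):
            return answer, True  # accepted without revision
        # Verification failed: re-enter the policy role, conditioning on the verdict.
        answer = generate(question, feedback="previous answer judged incorrect")
    return answer, verify(question, answer)

answer, verified = pag_inference("What is 2 + 2?")
```

Because revision is gated on the verifier's verdict, a confident correct first attempt is returned unchanged, which is the mechanism the summary credits with avoiding collapse toward always-revising.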