PAG: Multi-Turn Reinforced LLM Self-Correction with Policy as Generative Verifier

πŸ“… 2025-06-12
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Large language models (LLMs) excel at complex reasoning but struggle to reliably self-verify the correctness of their outputs; existing approaches rely on external verifiers or multi-stage training pipelines, which limit scalability. This paper proposes Policy as Generative Verifier (PAG), a single-model, multi-turn reinforcement learning framework that dynamically switches between policy and verifier roles. Its core innovation is a selective revision mechanism: corrections are triggered only when the model’s own generative verification detects an error, thereby preventing policy collapse. The framework achieves end-to-end joint optimization of reasoning and verification capabilities without requiring auxiliary verification modules. Experiments demonstrate significant improvements in both direct generation and self-correction accuracy across diverse reasoning benchmarks. Moreover, PAG’s self-verification outperforms self-consistency methods, achieving a superior trade-off between accuracy and computational efficiency.

πŸ“ Abstract
Large Language Models (LLMs) have demonstrated impressive capabilities in complex reasoning tasks, yet they still struggle to reliably verify the correctness of their own outputs. Existing solutions to this verification challenge often depend on separate verifier models or require multi-stage self-correction training pipelines, which limit scalability. In this paper, we propose Policy as Generative Verifier (PAG), a simple and effective framework that empowers LLMs to self-correct by alternating between policy and verifier roles within a unified multi-turn reinforcement learning (RL) paradigm. Distinct from prior approaches that always generate a second attempt regardless of model confidence, PAG introduces a selective revision mechanism: the model revises its answer only when its own generative verification step detects an error. This verify-then-revise workflow not only alleviates model collapse but also jointly enhances both reasoning and verification abilities. Extensive experiments across diverse reasoning benchmarks highlight PAG's dual advancements: as a policy, it enhances direct generation and self-correction accuracy; as a verifier, its self-verification outperforms self-consistency.
Problem

Research questions and friction points this paper is trying to address.

LLMs struggle to verify their own outputs reliably
Existing solutions lack scalability and require separate verifiers
PAG enables self-correction via multi-turn RL with selective revision
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-turn RL for self-correction and verification
Selective revision based on generative verification
Unified policy-verifier roles in LLMs
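The verify-then-revise workflow described above can be sketched as a minimal inference loop. This is an illustrative reconstruction, not the paper's implementation: `generate`, `verify`, and `revise` are hypothetical stand-ins for the single model acting in its policy and verifier roles, stubbed here so the control flow is runnable.

```python
def generate(question: str) -> str:
    """Policy role: produce a first answer (stubbed for illustration)."""
    return "answer_v1"

def verify(question: str, answer: str) -> bool:
    """Verifier role: the same model judges its own answer
    (stubbed so the first attempt is flagged as wrong)."""
    return answer != "answer_v1"

def revise(question: str, answer: str) -> str:
    """Policy role again: revise only after verification fails (stubbed)."""
    return "answer_v2"

def pag_inference(question: str, max_turns: int = 2) -> str:
    # Turn 1: direct generation by the policy.
    answer = generate(question)
    for _ in range(max_turns - 1):
        # Selective revision: keep the answer when self-verification passes,
        # instead of always generating a second attempt.
        if verify(question, answer):
            break
        answer = revise(question, answer)
    return answer
```

Note the key difference from unconditional self-correction: the `verify` call gates the revision, so a confident, verified answer is returned as-is rather than being rewritten on every turn.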
πŸ”Ž Similar Papers
No similar papers found.