🤖 AI Summary
Large language models (LLMs) often lack reliable self-verification capabilities for complex reasoning tasks, as generation and verification are typically decoupled and not jointly optimized. Method: We propose GRPO-Verif—a reinforcement learning–based algorithm that unifies generative reasoning and binary verification into a single end-to-end trainable framework. It introduces a joint loss function with a tunable hyperparameter to balance reasoning accuracy and verification confidence, leveraging self-generated feedback signals for optimization. Contribution/Results: GRPO-Verif significantly improves the model’s ability to discriminate correct from incorrect reasoning paths, achieving substantial gains in answer credibility and output stability without degrading original reasoning performance. Empirical results demonstrate consistent improvements across multiple reasoning benchmarks, establishing a novel paradigm for trustworthy, self-validating LLM inference.
📝 Abstract
The reasoning capabilities of large language models (LLMs) have been significantly improved through reinforcement learning (RL). Nevertheless, LLMs still struggle to consistently verify their own reasoning traces. This raises the research question of how to enhance the self-verification ability of LLMs and whether such an ability can further improve reasoning performance. In this work, we propose GRPO-Verif, an algorithm that jointly optimizes solution generation and self-verification within a unified loss function, with an adjustable hyperparameter controlling the weight of the verification signal. Experimental results demonstrate that our method enhances self-verification capability while maintaining comparable performance in reasoning.