Pushing the boundary on Natural Language Inference

📅 2025-04-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address weak generalization in natural language inference (NLI) caused by annotation artifacts and biases in supervised data, this paper proposes the first unsupervised chain-of-thought (CoT) reinforcement learning framework for NLI based on Group Relative Policy Optimization (GRPO), eliminating the need for human-annotated reasoning chains. The method combines GRPO optimization with parameter-efficient fine-tuning via LoRA/QLoRA and AWQ quantization, fitting a 32B-parameter model into a 22 GB memory footprint. Experiments on ANLI and 11 adversarial NLI benchmarks demonstrate state-of-the-art performance: the approach surpasses prior methods on 7 of the 11 adversarial sets and matches them on the rest, significantly improving robustness and deployment feasibility. The core contribution lies in the first application of GRPO to unsupervised CoT training for NLI and the empirical validation of its effectiveness under resource-constrained conditions.
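The paper does not include code, but GRPO's core idea can be sketched in a few lines: for each prompt, sample a group of CoT completions, score each with a rule-based reward (e.g., whether the chain's final NLI label matches gold, which needs no annotated rationales), and normalize rewards within the group instead of training a value network. The snippet below is a minimal, hypothetical illustration of that group-relative advantage computation, not the authors' implementation; the reward values are made up.

```python
# Hypothetical sketch of GRPO's group-relative advantage (not the paper's code).
# For one prompt, G sampled completions are scored, then each reward is
# normalized against the group's mean and standard deviation.
from statistics import mean, stdev

def group_relative_advantages(rewards, eps=1e-8):
    """Advantage of each completion relative to its sampling group."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# Example: 4 sampled CoT completions for one ANLI premise/hypothesis pair,
# rewarded 1.0 when the predicted label matches gold, else 0.0.
rewards = [1.0, 0.0, 0.0, 1.0]
advs = group_relative_advantages(rewards)
```

Completions that outperform their group get positive advantages and are reinforced; the group baseline replaces the critic model that PPO-style methods would require, which is what makes the approach cheap enough to pair with LoRA/QLoRA fine-tuning.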

📝 Abstract
Natural Language Inference (NLI) is a central task in natural language understanding with applications in fact-checking, question answering, and information retrieval. Despite its importance, current NLI systems heavily rely on supervised learning with datasets that often contain annotation artifacts and biases, limiting generalization and real-world applicability. In this work, we apply a reinforcement learning-based approach using Group Relative Policy Optimization (GRPO) for Chain-of-Thought (CoT) learning in NLI, eliminating the need for labeled rationales and enabling this type of training on more challenging datasets such as ANLI. We fine-tune 7B, 14B, and 32B language models using parameter-efficient techniques (LoRA and QLoRA), demonstrating strong performance across standard and adversarial NLI benchmarks. Our 32B AWQ-quantized model surpasses state-of-the-art results on 7 out of 11 adversarial sets – or on all of them considering our replication – within a 22GB memory footprint, showing that robust reasoning can be retained under aggressive quantization. This work provides a scalable and practical framework for building robust NLI systems without sacrificing inference quality.
Problem

Research questions and friction points this paper is trying to address.

Improving Natural Language Inference generalization without labeled rationales
Overcoming dataset biases in NLI using reinforcement learning
Achieving robust NLI performance under aggressive model quantization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reinforcement learning with GRPO for CoT
Parameter-efficient fine-tuning via LoRA/QLoRA
AWQ-quantized models retain robust reasoning
Pablo Miralles-Gonzalez
Department of Computer Systems, Technical University of Madrid
Javier Huertas-Tato
Universidad Politécnica de Madrid
Machine Learning, Neural Networks, Evolutionary Computation
Alejandro Martin
Department of Computer Systems, Technical University of Madrid
David Camacho
Universidad Politécnica de Madrid
Machine Learning, Social Network Analysis, Evolutionary Computation, Disinformation