🤖 AI Summary
This work identifies a pervasive instruction-following degradation problem in large reasoning models: enhancing mathematical reasoning tends to erode adherence to user instructions. To address this, we introduce MathIF, the first benchmark explicitly designed to evaluate instruction-following capability in mathematical reasoning, and reveal a significant negative correlation between reasoning performance and instruction controllability, with instruction adherence dropping by up to 47% as generation length increases. Methodologically, we design multi-dimensional instruction-control test cases, combine controllable generation modeling with behavioral consistency metrics, and conduct causal analysis of chain-of-thought distillation and reasoning-oriented reinforcement learning. We further propose a lightweight intervention strategy that recovers, on average, 22% instruction accuracy while sacrificing only 11% reasoning performance. Our contributions include the open-sourced MathIF benchmark and foundational insights, both theoretical and practical, for balancing reasoning capability with instruction controllability.
📝 Abstract
Instruction-following is essential for aligning large language models (LLMs) with user intent. While recent reasoning-oriented models exhibit impressive performance on complex mathematical problems, their ability to adhere to natural language instructions remains underexplored. In this work, we introduce MathIF, a dedicated benchmark for evaluating instruction-following in mathematical reasoning tasks. Our empirical analysis reveals a consistent tension between scaling up reasoning capacity and maintaining controllability, as models that reason more effectively often struggle to comply with user directives. We find that models tuned on distilled long chains-of-thought or trained with reasoning-oriented reinforcement learning often degrade in instruction adherence, especially when generation length increases. Furthermore, we show that even simple interventions can partially recover obedience, though at the cost of reasoning performance. These findings highlight a fundamental tension in current LLM training paradigms and motivate the need for more instruction-aware reasoning models. We release the code and data at https://github.com/TingchenFu/MathIF.
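Benchmarks of this kind typically pair each math problem with checkable constraints and score the fraction of responses that satisfy all of them. The sketch below illustrates that general pattern with a few toy verifiers; the function names, constraint set, and scoring rule are assumptions for illustration, not MathIF's actual API or evaluation protocol:

```python
import re

# Hypothetical verifiers illustrating checkable instruction constraints
# (word budget, required answer format, required keyword). The real MathIF
# constraint set may differ.
VERIFIERS = {
    "max_words": lambda text, limit: len(text.split()) <= limit,
    "ends_with_boxed": lambda text, _: bool(re.search(r"\\boxed\{[^{}]+\}\s*$", text)),
    "contains_keyword": lambda text, kw: kw.lower() in text.lower(),
}

def adherence_rate(responses, constraints):
    """Fraction of responses that satisfy every constraint attached to them.

    `constraints[i]` is a list of (verifier_name, argument) pairs for
    `responses[i]`; a response counts only if all its checks pass.
    """
    passed = 0
    for text, cons in zip(responses, constraints):
        if all(VERIFIERS[name](text, arg) for name, arg in cons):
            passed += 1
    return passed / len(responses)

# Example: one compliant and one non-compliant response.
rate = adherence_rate(
    ["Therefore the result is \\boxed{42}", "The result is forty-two."],
    [[("ends_with_boxed", None)], [("ends_with_boxed", None)]],
)
print(rate)  # 0.5
```

Hard, programmatically verifiable constraints like these avoid the noise of LLM-as-judge scoring, which matters when measuring small adherence differences across model scales.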