Incentivizing Consistent, Effective and Scalable Reasoning Capability in Audio LLMs via Reasoning Process Rewards

📅 2025-10-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
Audio large language models (Audio-LLMs) suffer from *test-time inverse scaling*: performance degrades as reasoning chains lengthen, primarily because intermediate reasoning steps receive no explicit supervision, leading to hallucination and error accumulation. To address this, we propose CESAR (Consistent, Effective, and Scalable Audio Reasoners), the first training paradigm that replaces outcome-based verification with *reasoning-process reward modeling*. Using online reinforcement learning via Group Relative Policy Optimization (GRPO), we design a multidimensional reward function assessing correctness, format compliance, consistency, causal logic, domain-knowledge integration, and calibrated reasoning depth. Our method identifies and exploits model-specific "reasoning sweet spots" (regions of reasoning length and structure where fidelity and effectiveness are jointly maximized), thereby improving both consistency and scalability. On MMAU Test-mini, CESAR achieves state-of-the-art performance, substantially outperforming Gemini 2.5 Pro and GPT-4o Audio; on MMSU, it approaches human-level performance while also enhancing multimodal reasoning and perception capabilities.

📝 Abstract
The role of reasoning in Audio Large Language Models remains largely underexplored, as introducing a reasoning process often degrades rather than improves performance during inference, a phenomenon we term test-time inverse scaling, where longer reasoning chains yield progressively worse results. We demonstrate that this stems not from fundamental limitations of reasoning itself, but from inadequate training: models without proper guidance for the reasoning process produce hallucinatory, inconsistent reasoning that accumulates errors over longer chains. To address these challenges, we introduce CESAR (Consistent, Effective, and Scalable Audio Reasoners), shifting from outcome verification to rewarding the reasoning process. Our online reinforcement learning framework employs Group Relative Policy Optimization with a multi-faceted reward suite that incentivizes not only correctness and format but also consistency, structured analytical patterns, causal reasoning, domain-knowledge integration, and calibrated reasoning depth. CESAR resolves test-time inverse scaling, transforming reasoning from a detriment into a gain while revealing model-specific "reasoning sweet spots", where performance peaks during test-time scaling. We achieve state-of-the-art results on MMAU Test-mini, substantially outperforming Gemini 2.5 Pro and GPT-4o Audio, and near-human-level performance on MMSU reasoning tasks. Through AI-as-judge evaluations and qualitative comparisons, we provide both quantitative and qualitative validation of our improved reasoning quality. Importantly, enhanced reasoning creates synergistic effects, simultaneously improving multimodal reasoning and perception capabilities. Overall, CESAR establishes a principled method for developing robust and scalable reasoning in Audio LLMs.
Problem

Research questions and friction points this paper is trying to address.

Addressing test-time inverse scaling, where longer reasoning chains degrade Audio LLM performance
Overcoming inadequate training that produces hallucinatory and inconsistent reasoning processes
Developing methods to incentivize consistent, effective and scalable reasoning in Audio LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Rewards the reasoning process itself, rather than only final outcomes, via online reinforcement learning (GRPO)
Uses a multi-faceted reward suite covering correctness, format, consistency, causal reasoning, domain-knowledge integration, and reasoning depth
Achieves state-of-the-art results on MMAU Test-mini and near-human performance on MMSU reasoning tasks