Reasoning Pattern Alignment Merging for Adaptive Reasoning

📅 2026-01-07
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the high computational cost and latency of large reasoning models, which often generate excessively long reasoning chains even for simple queries. Existing acceleration methods typically require costly retraining or are highly sensitive to prompt variations. To overcome these limitations, the authors propose a lightweight, training-free merging framework that adaptively combines a long chain-of-thought (Long-CoT) reasoning model with a Short-CoT instruction model layer by layer. The approach introduces a fusion mechanism grounded in reasoning-pattern alignment, guided by a small pattern-labeled calibration set and enhanced with a contrastive objective that improves pattern discriminability. Evaluated across seven mainstream reasoning benchmarks, the method substantially reduces inference cost while maintaining strong performance, demonstrating both effectiveness and broad applicability.

📝 Abstract
Recent large reasoning models (LRMs) have made substantial progress in complex reasoning tasks, yet they often generate lengthy reasoning paths for every query, incurring unnecessary computation and latency. Existing speed-up approaches typically rely on retraining the model or designing sophisticated prompting, which are either prohibitively expensive or highly sensitive to the input and prompt formulation. In this work, we study model merging as a lightweight alternative for efficient reasoning: by combining a long chain-of-thought (Long-CoT) reasoning model with a Short-CoT instruction model, we obtain an adaptive reasoner without training from scratch or requiring large-scale additional data. Building on this idea, we propose Reasoning Pattern Alignment Merging (RPAM), a layer-wise model merging framework based on feature alignment to facilitate query-adaptive reasoning. RPAM first constructs a small pattern-labeled calibration set that assigns each query an appropriate reasoning pattern. It then optimizes layer-wise merging coefficients by aligning the merged model's intermediate representations with those of the selected model, while a contrastive objective explicitly pushes them away from the non-selected model. Experiments on seven widely used reasoning benchmarks show that RPAM substantially reduces inference cost while maintaining strong performance. Upon article acceptance, we will provide open-source code to reproduce experiments for RPAM.
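The layer-wise merging described in the abstract can be illustrated with a minimal sketch: per-layer coefficients interpolate between the Long-CoT model's parameters and the Short-CoT model's parameters. The function name, the toy dict-of-floats parameter format, and the fixed coefficients below are all illustrative assumptions; in RPAM the coefficients are optimized so the merged model's intermediate representations align with the pattern-appropriate model on a calibration set, which is not shown here.

```python
def merge_layerwise(long_cot, short_cot, alphas):
    """Interpolate two models' parameters layer by layer (hypothetical sketch).

    long_cot / short_cot: dicts mapping layer name -> list of weights.
    alphas: dict mapping layer name -> merging coefficient in [0, 1],
    where 1.0 keeps the Long-CoT weights and 0.0 keeps the Short-CoT
    weights. In RPAM these per-layer coefficients are learned via
    feature alignment; here they are fixed inputs.
    """
    merged = {}
    for name, w_long in long_cot.items():
        a = alphas[name]
        merged[name] = [a * wl + (1.0 - a) * ws
                        for wl, ws in zip(w_long, short_cot[name])]
    return merged

# Toy example: layer0 leans toward the Long-CoT model, layer1 toward
# the Short-CoT model, mimicking a query-adaptive per-layer mix.
long_model = {"layer0": [1.0, 2.0], "layer1": [4.0, 4.0]}
short_model = {"layer0": [0.0, 0.0], "layer1": [0.0, 2.0]}
coeffs = {"layer0": 0.8, "layer1": 0.25}
merged = merge_layerwise(long_model, short_model, coeffs)
print(merged)
```

Because the coefficients are per layer rather than a single global mixing weight, different depths of the merged network can favor different reasoning patterns, which is what makes the merged model adaptive rather than a fixed average of the two parents.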
Problem

Research questions and friction points this paper is trying to address.

reasoning efficiency
model merging
inference latency
chain-of-thought
adaptive reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

model merging
adaptive reasoning
reasoning pattern alignment
chain-of-thought
feature alignment