InfiFusion: A Unified Framework for Enhanced Cross-Model Reasoning via LLM Fusion

📅 2025-01-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the limited cross-domain generalization of monolithic large language models (LLMs) on diverse reasoning benchmarks (e.g., GSM8K, MATH, HumanEval), this paper proposes a unified fusion framework for building a high-performance pivot model from multiple domain-specialized LLMs. The method integrates multi-step knowledge distillation, weight merging, and unified output aggregation. Its core contributions are: (1) Rate-Skewness Adaptive Fusion (RSAF), a parameter-merging strategy that dynamically adjusts top-K ratios during merging for greater flexibility and stability; and (2) an uncertainty-based logits-weighting mechanism that dynamically balances the contributions of source models, improving output stability and cross-domain generalization. Experiments demonstrate substantial improvements over strong baselines: +9.27% accuracy on GSM8K, +8.80% on MATH, and +8.89% on HumanEval, outperforming conventional ensemble and distillation approaches.
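The uncertainty-based logits weighting can be sketched as follows. This is a minimal illustrative version, not the paper's implementation: it uses inverse predictive entropy as the per-model confidence signal, and the function name `uncertainty_weighted_ensemble` is a hypothetical helper introduced here for clarity.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def uncertainty_weighted_ensemble(logits_list):
    """Fuse per-model next-token logits, down-weighting uncertain models.

    logits_list: list of 1-D arrays of shape (vocab,), one per source model.
    Weighting by inverse entropy is an illustrative choice; the paper's
    exact uncertainty measure may differ.
    """
    probs = [softmax(l) for l in logits_list]
    entropies = np.array([-(p * np.log(p + 1e-12)).sum() for p in probs])
    weights = 1.0 / (entropies + 1e-6)   # confident (low-entropy) models get more weight
    weights = weights / weights.sum()    # normalize to a convex combination
    fused = sum(w * p for w, p in zip(weights, probs))
    return fused, weights
```

For example, a model emitting a sharply peaked distribution receives a larger weight than one emitting a near-uniform distribution, so the fused distribution stays stable even when some source models are off-domain.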

📝 Abstract
Large Language Models (LLMs) have demonstrated strong performance across various reasoning tasks, yet building a single model that consistently excels across all domains remains challenging. This paper addresses this problem by exploring strategies to integrate multiple domain-specialized models into an efficient pivot model. We propose two fusion strategies to combine the strengths of multiple LLMs: (1) a pairwise, multi-step fusion approach that sequentially distills each source model into the pivot model, followed by a weight merging step to integrate the distilled models into the final model. This method achieves strong performance but requires substantial training effort; and (2) a unified fusion approach that aggregates all source models' outputs simultaneously. To improve the fusion process, we introduce a novel Rate-Skewness Adaptive Fusion (RSAF) technique, which dynamically adjusts top-K ratios during parameter merging for enhanced flexibility and stability. Furthermore, we propose an uncertainty-based weighting method for the unified approach, which dynamically balances the contributions of source models and outperforms other logits/distribution ensemble methods. We achieved accuracy improvements of 9.27%, 8.80%, and 8.89% on the GSM8K, MATH, and HumanEval tasks, respectively.
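The top-K parameter merging that RSAF adapts can be sketched as below. This is a simplified stand-in under stated assumptions: only the largest-magnitude weight deltas between a source model and the pivot are merged, and `skew_adaptive_ratio` is a hypothetical illustration of adapting the keep ratio from the skewness of the delta-magnitude distribution, not the paper's exact rule.

```python
import numpy as np

def topk_merge(pivot, source, ratio):
    """Merge a source weight tensor into the pivot, keeping only the
    top-`ratio` fraction of deltas by absolute magnitude."""
    delta = source - pivot
    k = max(1, int(round(ratio * delta.size)))
    flat = np.abs(delta).ravel()
    thresh = np.partition(flat, -k)[-k]    # k-th largest magnitude
    mask = np.abs(delta) >= thresh         # keep only the largest deltas
    return pivot + delta * mask

def skew_adaptive_ratio(delta, base=0.1, lo=0.02, hi=0.3):
    """Hypothetical adaptation rule: heavier-tailed (more skewed) delta
    distributions keep a smaller fraction of entries."""
    mags = np.abs(delta).ravel()
    mean, std = mags.mean(), mags.std() + 1e-12
    skew = ((mags - mean) ** 3).mean() / std ** 3
    return float(np.clip(base / (1.0 + max(skew, 0.0)), lo, hi))
```

In practice such a rule would pick a per-tensor ratio and then apply `topk_merge` layer by layer, so that layers where the source model changed only a few weights contribute only those sparse, high-magnitude updates to the pivot.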
Problem

Research questions and friction points this paper is trying to address.

Multi-domain Modeling
Performance Enhancement
Accuracy Improvement
Innovation

Methods, ideas, or system contributions that make the work stand out.

InfiFusion
Multi-domain Model Integration
Uncertainty Weighting
Zhaoyi Yan
InfiX.ai
Large Language Models, Model Fusion, Knowledge Distillation, Image Processing
Zhijie Sang
Microsoft
NLP
Yiming Zhang
The Hong Kong Polytechnic University
Yuhao Fu
Independent
Baoyi He
Zhejiang University
Qi Zhou
Harbin Institute of Technology, Shenzhen
Yining Di
The Hong Kong Polytechnic University
Chunlin Ji
Independent
Shengyu Zhang
Zhejiang University
Fei Wu
Zhejiang University
Hongxia Yang
Professor, HK Polytechnic University
Machine Learning, Generative AI, Cognitive Intelligence, Statistical Modeling