🤖 AI Summary
To address the limited cross-domain generalization of monolithic large language models (LLMs) on diverse reasoning benchmarks (e.g., GSM8K, MATH, HumanEval), this paper proposes a fusion framework for building a high-performance pivot model from multiple domain-specialized LLMs. The framework combines multi-step knowledge distillation, weight merging, and unified output aggregation. Its core contributions are: (1) Rate-Skewness Adaptive Fusion (RSAF), a dynamic parameter-fusion strategy that adaptively adjusts top-K ratios during weight merging, modulating each expert model's contribution per domain based on rate and skewness characteristics; and (2) an uncertainty-aware logits-weighting ensemble mechanism that improves output stability and cross-domain generalization. Experiments show substantial gains over strong baselines: +9.27% accuracy on GSM8K, +8.80% on MATH, and +8.89% on HumanEval, clearly outperforming conventional ensemble and distillation approaches.
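The summary does not spell out how RSAF picks its top-K ratios, so the following is a minimal sketch of what a skewness-driven top-K merge could look like, assuming a delta-merging setup in the spirit of TIES-style sparsification. The `skewness`-to-ratio mapping, the `k_min`/`k_max` bounds, and the equal averaging across experts are all hypothetical illustration choices, not the authors' actual RSAF schedule.

```python
import torch

def skewness(x: torch.Tensor) -> torch.Tensor:
    # Fisher skewness of a flattened tensor.
    x = x.flatten().float()
    mu = x.mean()
    sigma = x.std(unbiased=False).clamp_min(1e-12)
    return (((x - mu) / sigma) ** 3).mean()

def rsaf_merge(base: torch.Tensor, experts: list[torch.Tensor],
               k_min: float = 0.1, k_max: float = 0.5) -> torch.Tensor:
    """Merge expert weights into a base weight with per-expert adaptive top-K.

    Hypothetical rule: the more skewed an expert's delta-magnitude
    distribution (mass concentrated in a few salient weights), the smaller
    the keep-ratio. The real RSAF mapping is not given in this summary.
    """
    merged = base.clone()
    deltas = [e - base for e in experts]
    # Map |skewness| of each expert's delta to a keep-ratio in [k_min, k_max].
    skews = torch.stack([skewness(d.abs()) for d in deltas]).abs()
    ratios = k_max - (k_max - k_min) * (skews / skews.max().clamp_min(1e-12))
    for delta, r in zip(deltas, ratios):
        k = max(1, int(r.item() * delta.numel()))
        # Keep only the top-k largest-magnitude entries of this expert's delta.
        flat = delta.flatten()
        idx = flat.abs().topk(k).indices
        mask = torch.zeros_like(flat, dtype=torch.bool)
        mask[idx] = True
        merged += (flat * mask).view_as(delta) / len(experts)
    return merged
```

In this sketch the merge is applied tensor-by-tensor across the model's state dict; the key design point it illustrates is that the sparsity level is chosen per expert from a statistic of its weight deltas rather than fixed globally.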
📝 Abstract
Large Language Models (LLMs) have demonstrated strong performance across various reasoning tasks, yet building a single model that consistently excels across all domains remains challenging. This paper addresses this problem by exploring strategies for integrating multiple domain-specialized models into an efficient pivot model. We propose two fusion strategies to combine the strengths of multiple LLMs: (1) a pairwise, multi-step fusion approach that sequentially distills each source model into the pivot model, followed by a weight-merging step that integrates the distilled models into the final model; this method achieves strong performance but requires substantial training effort; and (2) a unified fusion approach that aggregates all source models' outputs simultaneously. To improve the fusion process, we introduce a novel Rate-Skewness Adaptive Fusion (RSAF) technique, which dynamically adjusts top-K ratios during parameter merging for enhanced flexibility and stability. Furthermore, we propose an uncertainty-based weighting method for the unified approach, which dynamically balances the contributions of the source models and outperforms other logits/distribution ensemble methods. We achieve accuracy improvements of 9.27%, 8.80%, and 8.89% on the GSM8K, MATH, and HumanEval tasks, respectively.
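The abstract does not specify the uncertainty estimator used to weight the source models, so the sketch below assumes a common choice, predictive entropy, purely for illustration: each model's next-token distribution is down-weighted in proportion to its entropy. The function name `uncertainty_weighted_ensemble` and the softmax-over-negative-entropy weighting are hypothetical, not the paper's confirmed method.

```python
import torch
import torch.nn.functional as F

def uncertainty_weighted_ensemble(logits_list: list[torch.Tensor]) -> torch.Tensor:
    """Fuse per-model next-token logits, down-weighting uncertain models.

    logits_list: list of [batch, vocab] tensors, one per source model.
    Returns a fused probability distribution of shape [batch, vocab].
    """
    entropies = []
    for logits in logits_list:
        p = F.softmax(logits, dim=-1)
        # Shannon entropy per example: higher entropy = less confident model.
        h = -(p * torch.log(p.clamp_min(1e-12))).sum(dim=-1)  # [batch]
        entropies.append(h)
    H = torch.stack(entropies, dim=0)            # [models, batch]
    # Confident (low-entropy) models receive larger normalized weights.
    w = F.softmax(-H, dim=0).unsqueeze(-1)       # [models, batch, 1]
    probs = torch.stack([F.softmax(l, dim=-1) for l in logits_list], dim=0)
    return (w * probs).sum(dim=0)                # [batch, vocab]
```

Because the weights are computed per example, an expert that is confident on math prompts but uncertain on code prompts contributes strongly only where it is reliable, which is the dynamic balancing behavior the abstract describes.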