Model Merging in the Era of Large Language Models: Methods, Applications, and Future Directions

📅 2026-03-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of efficiently combining the capabilities of multiple fine-tuned large language models without resorting to costly retraining or computationally expensive ensembles. The authors propose FUSE, a four-dimensional taxonomy that systematically unifies the theoretical foundations of model merging, such as loss landscape geometry and mode connectivity, with practical strategies including weight averaging, task vector arithmetic, sparsity-enhanced methods, mixture-of-experts architectures, and evolutionary optimization. By establishing a comprehensive, training-free framework for model merging, this study organizes existing open-source tools and evaluation benchmarks while also identifying critical theoretical gaps, scalability challenges, and the need for standardization. The resulting body of knowledge offers a structured roadmap to guide future research in efficient and effective model fusion.

📝 Abstract
Model merging has emerged as a transformative paradigm for combining the capabilities of multiple neural networks into a single unified model without additional training. With the rapid proliferation of fine-tuned large language models (LLMs), merging techniques offer a computationally efficient alternative to ensembles and full retraining, enabling practitioners to compose specialized capabilities at minimal cost. This survey presents a comprehensive and structured examination of model merging in the LLM era through the FUSE taxonomy, a four-dimensional framework organized along Foundations, Unification Strategies, Scenarios, and Ecosystem. We first establish the theoretical underpinnings of merging, including loss landscape geometry, mode connectivity, and the linear mode connectivity hypothesis. We then systematically review the algorithmic landscape, spanning weight averaging, task vector arithmetic, sparsification-enhanced methods, mixture-of-experts architectures, and evolutionary optimization approaches. For each method family, we analyze the core formulation, highlight representative works, and discuss practical trade-offs. We further examine downstream applications across multi-task learning, safety alignment, domain specialization, multilingual transfer, and federated learning. Finally, we survey the supporting ecosystem of open-source tools, community platforms, and evaluation benchmarks, and identify key open challenges including theoretical gaps, scalability barriers, and standardization needs. This survey aims to equip researchers and practitioners with a structured foundation for advancing model merging.
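For readers new to the method families the abstract names, the sketch below illustrates the two simplest formulations, plain weight averaging and task vector arithmetic. This is our own minimal example, not code from the survey; it assumes all models share an identical architecture (matching state-dict keys and tensor shapes), and the function names and the scale parameter are hypothetical.

```python
# Minimal illustrative sketch -- not code from the survey. Assumes all
# models share one architecture, so their state dicts have matching
# keys and tensor shapes. Function names are our own.
import torch

def weight_average(state_dicts, weights=None):
    """Merge by (optionally weighted) parameter averaging."""
    n = len(state_dicts)
    weights = weights if weights is not None else [1.0 / n] * n
    return {
        key: sum(w * sd[key] for w, sd in zip(weights, state_dicts))
        for key in state_dicts[0]
    }

def task_arithmetic(base, fine_tuned, scale=1.0):
    """Merge by task vector arithmetic: each task vector is a fine-tuned
    model's parameter delta from the shared base, and the merged model
    adds the scaled sum of those deltas back to the base weights."""
    merged = {k: v.clone() for k, v in base.items()}
    for sd in fine_tuned:
        for key in merged:
            merged[key] += scale * (sd[key] - base[key])
    return merged

# Toy usage: average two "models" that each hold a single 2x2 weight.
a = {"w": torch.ones(2, 2)}
b = {"w": torch.zeros(2, 2)}
print(weight_average([a, b])["w"])  # every entry is 0.5
```

Both routines are training-free, which is the efficiency argument the abstract makes; the more advanced families it surveys (sparsification, mixture-of-experts, evolutionary search) refine how these deltas are selected, routed, or weighted.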
Problem

Research questions and friction points this paper is trying to address.

model merging
large language models
unified model
computational efficiency
fine-tuned models
Innovation

Methods, ideas, or system contributions that make the work stand out.

model merging
large language models
FUSE taxonomy
task vector arithmetic
mode connectivity
Mingyang Song
Tencent Inc.
NLP, IR, LLMs
Mao Zheng
Large Language Model Department, Tencent, China