Cool-Fusion: Fuse Large Language Models without Training

📅 2024-07-29
🏛️ arXiv.org
📈 Citations: 4
Influential: 1
🤖 AI Summary
This work addresses the challenge of knowledge fusion across heterogeneous large language models (LLMs) without requiring training, fine-tuning, or vocabulary alignment. We propose the first zero-shot fusion method, built upon three key components: (1) distributed token generation synchronized at word boundaries, enabling collaborative decoding across models with disparate vocabularies; (2) a cross-model joint re-ranking mechanism that dynamically aggregates output confidence scores; and (3) a lightweight, training-free fusion architecture. Our approach departs from conventional ensemble and alignment paradigms, directly leveraging complementary strengths of diverse LLMs. Evaluated on the GSM8K mathematical reasoning benchmark, it achieves an average accuracy gain of 12.9% (ranging from +8.0% to +17.8%) over three strong baseline LLMs. The method significantly enhances both model complementarity and inference robustness, demonstrating effective zero-shot integration of heterogeneous LLMs.
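The word-boundary synchronization in component (1) can be sketched as follows. This is a minimal illustration, assuming whitespace marks a word boundary; the function name and this boundary definition are simplifications, not details taken from the paper.

```python
# Sketch of the word-boundary synchronization step: each model decodes
# its generated tokens to text, then truncates at the last complete word
# so that segments from models with different vocabularies line up.
# Treating whitespace as the word boundary is a simplifying assumption.

def truncate_at_word_boundary(text: str) -> str:
    """Return the longest prefix of `text` that ends at a word boundary."""
    if text.endswith(" "):
        return text  # already ends at a boundary
    cut = text.rfind(" ")
    # If no boundary has been reached yet, keep generating (return as-is).
    return text[:cut] if cut > 0 else text
```

In the full method, each source LLM keeps generating tokens until such a truncation yields a non-empty segment that all models agree ends at a word boundary.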

📝 Abstract
We focus on the problem of fusing two or more heterogeneous large language models (LLMs) to exploit their complementary strengths. One challenge in model fusion is its high computational load, e.g., fine-tuning models or aligning vocabularies via combinatorial optimization. To this end, we propose Cool-Fusion, a simple yet effective approach that fuses the knowledge of heterogeneous source LLMs to leverage their complementary strengths. Like ensemble approaches, Cool-Fusion requires no training of any kind; unlike ensemble methods, however, it is applicable to any set of source LLMs with different vocabularies. The basic idea is to have each source LLM individually generate tokens until they can be decoded into a text segment that ends at a word boundary common to all source LLMs. Then the source LLMs jointly rerank the generated text segments and select the best one, which becomes the fused text generation for that step. Extensive experiments are conducted across a variety of benchmark datasets. On GSM8K, Cool-Fusion improves accuracy over three strong source LLMs by a significant 8%-17.8%.
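The generate-then-rerank loop described in the abstract can be sketched as below. The `generate_segment` and `score` interfaces, and averaging as the aggregation rule, are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch of one Cool-Fusion decoding step. Each `model` is
# assumed to expose generate_segment(context) -> a candidate text
# segment ending at a common word boundary, and score(text) -> an
# average per-token log-probability. These interfaces and the mean
# aggregation are illustrative assumptions, not the paper's exact API.

def cool_fusion_step(models, context):
    # 1. Every source LLM independently proposes a text segment that
    #    ends at a word boundary shared across all vocabularies.
    candidates = [m.generate_segment(context) for m in models]

    # 2. All models jointly score each candidate; here the per-model
    #    scores are averaged into a single joint score.
    def joint_score(segment):
        return sum(m.score(context + segment) for m in models) / len(models)

    # 3. The best-scoring segment is the fused generation for this step.
    return max(candidates, key=joint_score)
```

Repeating this step, appending the winning segment to the context each time, yields the full fused generation.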
Problem

Research questions and friction points this paper is trying to address.

Fuse heterogeneous large language models without training
Address vocabulary discrepancies among different LLMs
Reduce computational load in model fusion process
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fuses LLMs without any training
Ensembles LLMs at the text level
Overcomes vocabulary discrepancies via joint reranking