ProFuser: Progressive Fusion of Large Language Models

📅 2024-08-09
🏛️ arXiv.org
📈 Citations: 3 · Influential: 0
📄 PDF
🤖 AI Summary
Existing multi-LLM fusion approaches assess each source model's advantage using training loss alone, which gives an incomplete picture of model strengths. Method: This paper proposes ProFuser, a framework that evaluates model advantage in two modes: cross-entropy on ground truth under teacher forcing (training mode) and the quality of each model's own generated responses (inference mode). A progressive weighting schedule shifts the assessment from inference mode to training mode over the course of fusion, enabling joint optimization over heterogeneous open-source LLMs including Vicuna, Llama-2, and MPT. Contribution/Results: Fusing vicuna-7b-v1.5, Llama-2-7b-chat, and mpt-7b-8k-chat, ProFuser improves over training-mode-only fusion baselines in knowledge, reasoning, and safety, supporting the effectiveness of the dual-mode assessment and progressive fusion mechanism.
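A minimal sketch of how the two assessment modes might be combined, assuming a linear schedule. The function names (`mode_weight`, `model_advantage`), the linear ramp, and the interpolation form are assumptions based on the summary and abstract, not the authors' implementation.

```python
def mode_weight(step: int, total_steps: int) -> float:
    """Weight on the training-mode advantage at a given training step.

    ProFuser progressively transitions from inference mode to training mode,
    so this ramps from 0 (pure inference-mode assessment) toward 1 (pure
    training-mode assessment). A linear ramp is assumed; the paper may use
    a different schedule.
    """
    return min(1.0, step / total_steps)


def model_advantage(ce_loss: float, inference_score: float,
                    step: int, total_steps: int) -> float:
    """Combine both assessment modes into one advantage score for a source model.

    ce_loss: teacher-forcing cross-entropy on the ground truth (lower is
        better), negated so that larger means more advantageous.
    inference_score: quality score of the model's own generated response
        (higher is better); how responses are scored is an assumption here.
    """
    lam = mode_weight(step, total_steps)
    return lam * (-ce_loss) + (1.0 - lam) * inference_score
```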

📝 Abstract
While fusing the capacities and advantages of various large language models (LLMs) offers a pathway to construct more powerful and versatile models, a fundamental challenge is to properly select advantageous models during training. Existing fusion methods primarily focus on the training mode that uses cross entropy on ground truth in a teacher-forcing setup to measure a model's advantage, which may provide limited insight into model advantage. In this paper, we introduce a novel approach that enhances the fusion process by incorporating both the training and inference modes. Our method evaluates model advantage not only through cross entropy during training but also by considering inference outputs, providing a more comprehensive assessment. To combine the two modes effectively, we introduce ProFuser to progressively transition from inference mode to training mode. To validate ProFuser's effectiveness, we fused three models, including vicuna-7b-v1.5, Llama-2-7b-chat, and mpt-7b-8k-chat, and demonstrated improved performance in knowledge, reasoning, and safety compared to baseline methods.
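The fusion step itself is not spelled out in the abstract; in the FuseLLM line of work it is typically a token-level distillation toward the source models' distributions. The sketch below is a hypothetical instance of that pattern: the softmax weighting over advantages, the `fusion_loss` name, and the shared-vocabulary assumption are illustrative, not details confirmed by the paper.

```python
import torch
import torch.nn.functional as F

def fusion_loss(student_logits: torch.Tensor,
                source_logits: list[torch.Tensor],
                advantages: torch.Tensor,
                temperature: float = 1.0) -> torch.Tensor:
    """Distill the fused (student) model toward an advantage-weighted mixture
    of the source models' token distributions.

    student_logits: (batch, seq, vocab) logits of the model being trained.
    source_logits:  one (batch, seq, vocab) tensor per source LLM (e.g.,
        vicuna-7b-v1.5, Llama-2-7b-chat, mpt-7b-8k-chat), assumed already
        aligned to a shared vocabulary; cross-tokenizer alignment is a
        nontrivial step omitted here.
    advantages:     (num_sources,) advantage scores from the dual-mode
        assessment.
    """
    # Turn advantage scores into mixture weights over the source models.
    weights = torch.softmax(advantages / temperature, dim=0)

    # Advantage-weighted target distribution over the vocabulary.
    target = sum(w * F.softmax(logits, dim=-1)
                 for w, logits in zip(weights, source_logits))

    # KL divergence between the student and the fused target.
    log_probs = F.log_softmax(student_logits, dim=-1)
    return F.kl_div(log_probs, target, reduction="batchmean")
```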
Problem

Research questions and friction points this paper is trying to address.

Selecting optimal models during LLM fusion training
Evaluating model advantages beyond teacher-forcing setups
Combining training and inference modes for fusion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Progressive fusion of training and inference modes
Evaluates model advantage using cross entropy and outputs
Combines multiple LLMs for enhanced performance (a toy weighting example follows this list)
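To make the bullets above concrete, here is a toy end-to-end example that turns hypothetical dual-mode scores for the three source models into per-model fusion weights. Every number is invented for illustration.

```python
import torch

# Hypothetical dual-mode scores for the three source models named in the
# abstract; all values below are made up for illustration.
models = ["vicuna-7b-v1.5", "Llama-2-7b-chat", "mpt-7b-8k-chat"]
ce_losses = torch.tensor([1.9, 2.1, 2.4])      # training mode: lower is better
inf_scores = torch.tensor([0.62, 0.71, 0.55])  # inference mode: higher is better

# Early in training the progressive schedule still favors inference mode.
lam = 200 / 1000
advantages = lam * (-ce_losses) + (1 - lam) * inf_scores

# Softmax turns advantage scores into per-model fusion weights.
weights = torch.softmax(advantages, dim=0)
for name, w in zip(models, weights):
    print(f"{name}: {w.item():.3f}")
```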
👥 Authors

Tianyuan Shi · Sun Yat-sen University · NLP
Fanqi Wan · Sun Yat-sen University · NLP, LLMs
Canbin Huang · School of Computer Science and Engineering, Sun Yat-sen University
Xiaojun Quan · Professor, School of Computer Science and Engineering, Sun Yat-sen University · natural language processing, text mining, machine learning
Chenliang Li · Alibaba Group
Mingshi Yan · Alibaba Group
Ji Zhang · Alibaba Group