🤖 AI Summary
To address low training efficiency, poor scalability, and weak cross-domain knowledge transfer in resource-constrained settings, this paper proposes MoFE—a novel architecture that integrates parameter-efficient fine-tuning (PEFT) with the Mixture of Experts (MoE) paradigm. MoFE is the first to freeze *all* feed-forward network (FFN) layers within the MoE structure, thereby preserving expert specialization while drastically reducing trainable parameters. Experimental results demonstrate that MoFE reduces trainable parameters by over 70% compared to full fine-tuning, significantly improving training efficiency. Although its performance is marginally lower than full fine-tuning, MoFE consistently outperforms mainstream PEFT methods—including LoRA and Adapter—across diverse domains. Moreover, it exhibits strong generalization capability and practical deployment value in multi-domain scenarios, offering an effective trade-off between efficiency, capacity, and adaptability for resource-limited applications.
📝 Abstract
We propose the Mixture of Frozen Experts (MoFE) architecture, which combines Parameter-Efficient Fine-tuning (PEFT) with the Mixture of Experts (MoE) paradigm to enhance both training efficiency and model scalability. By freezing the Feed-Forward Network (FFN) layers within the MoE framework, MoFE significantly reduces the number of trainable parameters, improving training efficiency while still allowing effective knowledge transfer from the expert models. This facilitates the creation of models proficient in multiple domains. We conduct experiments to evaluate the trade-offs between performance and efficiency, compare MoFE with other PEFT methodologies, assess the impact of domain expertise in the constituent models, and determine the optimal training strategy. The results show that, although there may be some trade-offs in performance, the efficiency gains are substantial, making MoFE a reasonable solution for real-world, resource-constrained environments.
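To make the efficiency argument concrete, the core idea of freezing the expert FFNs can be sketched with a back-of-the-envelope parameter count. This is a minimal illustration with hypothetical layer shapes, not the paper's actual code or model configuration: it shows how, in a single MoE layer, freezing all expert FFNs leaves only the (small) router as trainable.

```python
# Hypothetical sketch of the MoFE parameter budget for one MoE layer:
# the expert FFNs are frozen, so only the router contributes trainable
# parameters (in a full model, attention blocks would also remain trainable).

def ffn_params(d_model, d_ff):
    """Parameter count of one two-layer FFN expert (weights + biases)."""
    return d_model * d_ff + d_ff + d_ff * d_model + d_model

def moe_layer_params(d_model, d_ff, n_experts, freeze_experts=True):
    """Return (total, trainable) parameter counts for one MoE layer."""
    router = d_model * n_experts + n_experts          # linear gating network
    experts = n_experts * ffn_params(d_model, d_ff)   # all expert FFNs
    total = router + experts
    trainable = router if freeze_experts else total   # MoFE: experts frozen
    return total, trainable

total, trainable = moe_layer_params(d_model=1024, d_ff=4096, n_experts=4)
print(f"trainable fraction of this layer: {trainable / total:.4%}")
```

With these illustrative shapes, the router accounts for well under 1% of the layer's parameters, which is the mechanism behind the large reduction in trainable parameters the paper reports (the exact figure depends on model size, expert count, and which non-expert components stay trainable).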