🤖 AI Summary
This work addresses the limited generalization of routing strategies in traditional Mixture-of-Experts (MoE) architectures under distribution shifts. The authors propose kNN-MoE, which they present as the first approach to integrate case-based analogical reasoning into MoE routing, retrieving historical optimal expert assignments via k-nearest neighbors. The method fuses the frozen router's outputs with the retrieved assignments through similarity weighting and employs a confidence-driven hybrid strategy for adaptive expert selection. Notably, kNN-MoE requires no fine-tuning, achieves substantial improvements over existing zero-shot baselines, and matches the performance of costly supervised fine-tuning approaches.
📝 Abstract
Mixture-of-Experts (MoE) architectures scale large language models efficiently by employing a parametric "router" to dispatch tokens to a sparse subset of experts. Typically, this router is trained once and then frozen, rendering routing decisions brittle under distribution shifts. We address this limitation by introducing kNN-MoE, a retrieval-augmented routing framework that reuses optimal expert assignments from a memory of similar past cases. This memory is constructed offline by directly optimizing token-wise routing logits to maximize the likelihood on a reference set. Crucially, we use the aggregate similarity of the retrieved neighbors as a confidence-driven mixing coefficient, allowing the method to fall back to the frozen router when no relevant cases are found. Experiments show that kNN-MoE outperforms zero-shot baselines and rivals computationally expensive supervised fine-tuning.
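To make the routing mechanics in the abstract concrete, here is a minimal sketch of the retrieve-fuse-mix step. All names, the cosine-similarity metric, the softmax neighbor weighting, and the exact mapping from aggregate similarity to the mixing coefficient are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def knn_moe_route(x, memory_keys, memory_logits, router_logits, k=3, tau=1.0):
    """Illustrative sketch of kNN-MoE-style routing fusion.

    x             : (d,)   token representation to route
    memory_keys   : (n, d) stored token representations (the case memory)
    memory_logits : (n, e) stored per-case optimal routing logits
    router_logits : (e,)   frozen parametric router's logits for x
    Returns fused routing logits of shape (e,).
    """
    # Cosine similarity between the query token and every stored case.
    sims = memory_keys @ x / (
        np.linalg.norm(memory_keys, axis=1) * np.linalg.norm(x) + 1e-9
    )
    # Indices of the k most similar cases.
    top = np.argsort(sims)[-k:]
    # Similarity-weighted fusion of the retrieved routing logits.
    w = np.exp(sims[top] / tau)
    w /= w.sum()
    retrieved = w @ memory_logits[top]
    # Aggregate neighbor similarity as a confidence-driven mixing
    # coefficient (clipped to [0, 1]; this mapping is an assumption).
    conf = float(np.clip(sims[top].mean(), 0.0, 1.0))
    # Fall back to the frozen router when no relevant cases are found.
    return conf * retrieved + (1.0 - conf) * router_logits
```

When the retrieved neighbors are near-duplicates of the query, `conf` approaches 1 and the memory dominates; when even the nearest cases are dissimilar, `conf` collapses toward 0 and the frozen router's decision is used unchanged, matching the fallback behavior described in the abstract.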