Orchestrating Heterogeneous Experts: A Scalable MoE Framework with Anisotropy-Preserving Fusion

📅 2025-11-18
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the challenges of cross-border e-commerce search, where linguistic diversity and fine-grained semantic discrepancies lead to inconsistent performance of a single model across multiple regions. To overcome this, the authors propose a scalable coarse-grained Mixture-of-Experts (MoE) framework that employs a query-level dynamic routing mechanism to dispatch inputs to heterogeneous open-source large language models—such as Qwen and Gemma—and integrates their outputs via an information-preserving concatenation fusion strategy. This approach avoids the distortion of the anisotropic structure of embedding manifolds commonly induced by conventional weighted averaging. Without requiring costly pretraining, the method achieves a 0.72 percentage point improvement in AUC over dense baselines of comparable size on a dataset spanning six Southeast Asian markets, while delivering a 9% higher inference throughput of 13.72 queries per second (QPS).

📝 Abstract
In cross-border e-commerce, search relevance modeling faces the dual challenge of extreme linguistic diversity and fine-grained semantic nuances. Existing approaches typically rely on scaling up a single monolithic Large Language Model (LLM). However, our empirical analysis reveals that single models suffer from uneven capability distributions across regions, for example excelling in English while underperforming in specific Southeast Asian languages. In this work, we shift the paradigm from scaling a single model to orchestrating heterogeneous experts. We propose a scalable Coarse-grained Mixture-of-Experts (MoE) framework that leverages the inherent complementarity of distinct open-source LLMs (e.g., Qwen, Gemma) without expensive pre-training. Unlike standard token-level MoE, our framework dynamically routes entire queries to specialized experts and, crucially, employs an Information-Preserving Concatenation Fusion strategy. We theoretically posit that preserving the distinct embedding manifolds of heterogeneous experts, rather than compressing them via weighted averaging, is essential for capturing complex relevance signals in a multi-model latent space. On datasets spanning six Southeast Asian markets, our MoE improves AUC by 0.72 percentage points over a dense baseline with the same active parameters. Meanwhile, the optimized pipeline achieves 13.72 queries per second (QPS), a 9% throughput improvement.
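The contrast between the paper's concatenation fusion and conventional weighted averaging can be illustrated with a minimal sketch. All names here (`W_qwen`, `fuse_concat`, the toy routing rule, the dimensions) are illustrative assumptions, not the paper's implementation; real experts would be full LLM encoders with learned routing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in projections for two heterogeneous experts (e.g. Qwen, Gemma),
# each mapping a query embedding into its own latent space.
W_qwen = rng.standard_normal((8, 4))
W_gemma = rng.standard_normal((8, 4))

def route(query_emb):
    """Query-level routing: dispatch the WHOLE query to one expert
    (unlike token-level MoE). Toy rule here; the paper's router is learned."""
    return "qwen" if query_emb.sum() > 0 else "gemma"

def fuse_concat(h_qwen, h_gemma):
    """Information-preserving fusion: concatenate expert embeddings,
    leaving each expert's anisotropic manifold intact in its own subspace."""
    return np.concatenate([h_qwen, h_gemma], axis=-1)

def fuse_avg(h_qwen, h_gemma, w=0.5):
    """Conventional weighted averaging: mixes the two latent spaces
    into one, which the paper argues distorts their structure."""
    return w * h_qwen + (1 - w) * h_gemma

q = rng.standard_normal(8)             # toy query embedding
h_q, h_g = q @ W_qwen, q @ W_gemma     # per-expert representations
z_cat = fuse_concat(h_q, h_g)          # shape (8,): both subspaces survive
z_avg = fuse_avg(h_q, h_g)             # shape (4,): subspaces collapsed
```

The concatenated vector keeps each expert's output recoverable (the first half is exactly the Qwen representation), whereas the averaged vector cannot be decomposed back into its sources, which is the distortion the fusion strategy is designed to avoid.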
Problem

Research questions and friction points this paper is trying to address.

search relevance
linguistic diversity
semantic nuance
cross-border e-commerce
heterogeneous capability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture-of-Experts
Heterogeneous LLMs
Anisotropy-Preserving Fusion
Query-level Routing
Cross-border E-commerce Search
Ye Liu
Institute of Intelligent Technology, Alibaba International Digital Commerce Group
Xu Chen
Harbin Institute of Technology, Shenzhen
Video Understanding · Embodied AI · Model Compression
Wuji Chen
Institute of Intelligent Technology, Alibaba International Digital Commerce Group
Mang Li
Institute of Intelligent Technology, Alibaba International Digital Commerce Group