🤖 AI Summary
Problem: Existing LLM inference simulators fail to accurately model the heterogeneous scaling, cross-cluster expert routing, and complex pipeline dynamics introduced by Mixture-of-Experts (MoE) and decoupled architectures (e.g., PD/AF separation).
Method: We present the first native system-level simulator supporting MoE and architectural decoupling with high fidelity. It incorporates fine-grained operator modeling, expert parallelism, PD/AF decoupled execution, cross-cluster communication modeling, and joint scheduling policies, enabling accurate end-to-end simulation of large-scale inference workflows.
Contribution/Results: Unlike conventional simulators, which are tied to a single co-located deployment design, our framework supports multi-granularity performance prediction and system optimization across architectures. It achieves sub-8% error on representative MoE and decoupled models, providing a scalable, verifiable simulation infrastructure for next-generation LLM inference system design.
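To make the cross-cluster expert-routing cost concrete, here is a hypothetical back-of-envelope sketch (not Frontier's actual model; all names, defaults, and the uniform-routing assumption are illustrative) of the all-to-all dispatch volume a simulator must account for under expert parallelism:

```python
# Hypothetical sketch: bytes each EP rank sends in one top-k MoE
# dispatch phase, assuming tokens route uniformly across experts.
# Parameter names and defaults are illustrative, not from Frontier.

def ep_all_to_all_bytes(tokens: int, hidden: int, top_k: int,
                        ep_ranks: int, bytes_per_elem: int = 2) -> int:
    """Bytes one EP rank sends during MoE dispatch.

    Each token is routed to top_k experts; under uniform routing,
    a fraction (ep_ranks - 1) / ep_ranks of those routed copies
    land on remote ranks and must cross the interconnect.
    """
    local_tokens = tokens // ep_ranks           # tokens resident per rank
    copies = local_tokens * top_k               # routed activations per rank
    remote = copies * (ep_ranks - 1) / ep_ranks # copies leaving the rank
    return int(remote * hidden * bytes_per_elem)

# e.g. 8192 tokens, hidden size 4096, top-2 routing over 8 EP ranks, fp16
print(ep_all_to_all_bytes(8192, 4096, 2, 8))  # → 14680064 (~14 MiB/rank)
```

A simulator with native EP support has to track this per-layer traffic (and its overlap with compute) rather than folding it into a flat communication constant.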
📝 Abstract
Large Language Model (LLM) inference is growing increasingly complex with the rise of Mixture-of-Experts (MoE) models and disaggregated architectures that decouple components such as prefill/decode (PD) or attention/FFN (AF) for heterogeneous scaling. Existing simulators, architected for co-located, dense models, cannot capture the intricate system dynamics of these emerging paradigms. We present Frontier, a high-fidelity simulator designed from the ground up for this new landscape. Frontier introduces a unified framework that models both co-located and disaggregated systems, with native support for MoE inference under expert parallelism (EP). It enables the simulation of complex workflows such as cross-cluster expert routing and advanced pipelining strategies for latency hiding. To ensure fidelity, Frontier incorporates refined operator models that improve prediction accuracy. Frontier empowers the community to design and optimize the future of LLM inference at scale.
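As a minimal illustration of the kind of estimate a PD-disaggregated simulator produces, the sketch below models one request: prefill on one cluster, KV-cache transfer over the interconnect, then decode on another cluster. All parameter names, default rates, and the fp16/32-layer KV sizing are illustrative assumptions, not Frontier's actual operator models.

```python
# Hypothetical per-request latency model for PD disaggregation.
# Defaults are illustrative: 0.5 ms/token prefill, 20 ms/token decode,
# KV cache of 2 (K and V) * 32 layers * 4096 hidden * 2 bytes (fp16)
# per token, shipped over a 100 Gbps cross-cluster link.

def pd_request_latency(prompt_tokens: int, output_tokens: int,
                       prefill_tok_ms: float = 0.5,
                       decode_tok_ms: float = 20.0,
                       kv_bytes_per_tok: int = 2 * 32 * 4096 * 2,
                       link_gbps: float = 100.0) -> dict:
    """Latency (ms) for one request on a PD-disaggregated system."""
    prefill_ms = prompt_tokens * prefill_tok_ms         # prefill cluster
    kv_bits = prompt_tokens * kv_bytes_per_tok * 8      # KV cache to ship
    kv_ms = kv_bits / (link_gbps * 1e6)                 # cross-cluster hop
    decode_ms = output_tokens * decode_tok_ms           # decode cluster
    return {"ttft_ms": prefill_ms + kv_ms + decode_tok_ms,  # first token
            "total_ms": prefill_ms + kv_ms + decode_ms}     # full request

# e.g. a 1024-token prompt producing 128 output tokens
print(pd_request_latency(1024, 128))
```

Even this toy model surfaces the core PD trade-off: the KV transfer adds to time-to-first-token, so a faithful simulator must model the interconnect and its overlap with compute, not just per-device FLOP rates.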