🤖 AI Summary
Current large language models (LLMs) lack native support for high-precision numerical computation, and multi-agent approaches that rely on external API calls introduce substantial communication overhead, low efficiency, and poor scalability. To address this, we propose PiMoE, a physically-isolated Mixture-of-Experts architecture that embeds computational modules directly within the model and uses token-level dynamic routing to interleave reasoning and computation within a single inference chain. PiMoE adopts a decoupled training paradigm with end-to-end inference, jointly optimizing textual understanding and numerical computation capabilities. On two reasoning-computation benchmarks, PiMoE outperforms fine-tuned LLMs in accuracy and, compared with state-of-the-art multi-agent systems, significantly reduces response latency (−42%), token consumption (−38%), and GPU energy usage (−35%), achieving superior efficiency, interpretability, and scalability.
📝 Abstract
Complex systems typically rely on high-precision numerical computation to support decisions, but current large language models (LLMs) cannot yet incorporate such computation as an intrinsic, interpretable capability within existing architectures. Mainstream multi-agent approaches can leverage external experts, but they inevitably introduce communication overhead and suffer from inefficient emergence of multimodal capabilities and limited scalability. To this end, we propose PiMoE (Physically-isolated Mixture of Experts), a training and inference architecture that integrates computation and reasoning. Instead of following the workflow paradigm of tool invocation, PiMoE endogenously integrates computational capability into the neural network by separately training experts, a text-to-computation module, and a router. At inference, the router directs computation and reasoning at the token level, enabling the two to alternate iteratively within a single chain of thought. We evaluate PiMoE on two reasoning-computation tasks against LLM fine-tuning and multi-agent system approaches. Results show that PiMoE not only achieves higher accuracy than directly fine-tuned LLMs but also delivers significant improvements in response latency, token usage, and GPU energy consumption over mainstream multi-agent approaches. PiMoE offers an efficient, interpretable, and scalable paradigm for next-generation scientific and industrial intelligent systems.
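To make the token-level routing idea concrete, below is a minimal PyTorch sketch of a layer that mixes a reasoning expert and a physically-isolated computation expert per token. The paper's actual layer design, expert interfaces, and training objective are not given here, so all names and shapes (`TextExpert`, `ComputeExpert`, `PiMoELayer`, `dim=64`) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: module names, shapes, and the frozen-weight isolation
# below are assumptions; they stand in for PiMoE's separately trained experts and router.
import torch
import torch.nn as nn


class TextExpert(nn.Module):
    """Stand-in for the language/reasoning expert (a small feed-forward block)."""
    def __init__(self, dim: int):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.ff(x)


class ComputeExpert(nn.Module):
    """Stand-in for the physically-isolated numerical expert; trained separately
    and frozen here, so reasoning updates cannot perturb its weights."""
    def __init__(self, dim: int):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        for p in self.parameters():
            p.requires_grad = False  # isolation: computation weights stay fixed

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.ff(x)


class PiMoELayer(nn.Module):
    """Token-level router that mixes the two experts in one forward pass, letting
    reasoning and computation alternate inside a single chain of thought."""
    def __init__(self, dim: int):
        super().__init__()
        self.router = nn.Linear(dim, 2)  # per-token logits: [text, compute]
        self.text_expert = TextExpert(dim)
        self.compute_expert = ComputeExpert(dim)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, dim); the routing decision is made per token.
        weights = torch.softmax(self.router(hidden), dim=-1)            # (B, T, 2)
        outputs = torch.stack(
            [self.text_expert(hidden), self.compute_expert(hidden)], dim=-1
        )                                                                # (B, T, dim, 2)
        return (outputs * weights.unsqueeze(-2)).sum(dim=-1)            # weighted mix


# Minimal usage example with random activations.
layer = PiMoELayer(dim=64)
tokens = torch.randn(2, 16, 64)   # (batch=2, seq_len=16, hidden=64)
print(layer(tokens).shape)        # torch.Size([2, 16, 64])
```

In this sketch, routing is soft (a per-token softmax over the two experts); a hard or top-1 routing rule would yield the strict reasoning/computation alternation the abstract describes, at the cost of a non-differentiable decision.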