PiMoE: Token-Level Routing for Integrating High-Precision Computation and Reasoning

📅 2025-09-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Current large language models (LLMs) lack native support for high-precision numerical computation, while multi-agent approaches rely on external API calls—introducing substantial communication overhead, low efficiency, and poor scalability. To address this, we propose PiMoE: a physics-isolated Mixture-of-Experts architecture that enables token-level dynamic routing, deeply embedding computational modules within the model and supporting interleaved reasoning–computation execution in a single inference chain. PiMoE adopts a decoupled training paradigm with end-to-end inference, jointly optimizing textual understanding and numerical computation capabilities. Experiments on two reasoning–computation benchmarks demonstrate that PiMoE outperforms fine-tuned LLMs in accuracy and significantly reduces response latency (−42%), token consumption (−38%), and GPU energy usage (−35%) versus state-of-the-art multi-agent systems. The architecture achieves superior efficiency, interpretability, and scalability.

📝 Abstract
Complex systems typically rely on high-precision numerical computation to support decisions, but current large language models (LLMs) cannot yet incorporate such computations as an intrinsic and interpretable capability with existing architectures. Mainstream multi-agent approaches can leverage external experts, but inevitably introduce communication overhead and suffer from inefficiency, weak multimodal emergent capability, and limited scalability. To this end, we propose PiMoE (Physically-isolated Mixture of Experts), a training and inference architecture for integrating computation and reasoning. Instead of the workflow paradigm of tool invocation, PiMoE endogenously integrates computational capabilities into neural networks after separately training experts, a text-to-computation module, and a router. At inference, the router directs computation and reasoning at the token level, thereby enabling iterative alternation within a single chain of thought. We evaluate PiMoE on two reasoning-computation tasks against LLM fine-tuning and multi-agent system approaches. Results show that the PiMoE architecture not only achieves higher accuracy than directly fine-tuning LLMs but also significantly improves response latency, token usage, and GPU energy consumption compared with mainstream multi-agent approaches. PiMoE offers an efficient, interpretable, and scalable paradigm for next-generation scientific and industrial intelligent systems.
Problem

Research questions and friction points this paper is trying to address.

Integrating high-precision computation into neural networks interpretably
Overcoming communication overhead in multi-agent expert systems
Enabling token-level routing between computation and reasoning tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Token-level routing for computation-reasoning integration
Physically-isolated mixture of experts architecture
Endogenous computational capability without tool invocation
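The bullets above can be illustrated with a minimal sketch of token-level routing between a "reasoning" expert and a "computation" expert. This is an assumption-laden toy, not the paper's implementation: the expert functions, router weights, and hard top-1 routing rule are all stand-ins chosen for clarity.

```python
import numpy as np

# Toy token-level router: each token's hidden state is sent to one of two
# experts (0 = reasoning, 1 = computation). All names, shapes, and the
# hard top-1 rule are illustrative assumptions, not PiMoE's actual design.

rng = np.random.default_rng(0)
D, T = 16, 5                         # hidden size, sequence length
W_router = rng.normal(size=(D, 2))   # linear router over 2 routes

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def reasoning_expert(h):
    return np.tanh(h)                # stand-in for the language expert

def computation_expert(h):
    return h * 2.0                   # stand-in for the numerical module

def route_tokens(hidden):            # hidden: (T, D)
    probs = softmax(hidden @ W_router)   # per-token routing probabilities
    choice = probs.argmax(axis=-1)       # hard top-1 expert per token
    out = np.where(choice[:, None] == 0,
                   reasoning_expert(hidden),
                   computation_expert(hidden))
    return out, choice

hidden = rng.normal(size=(T, D))
out, choice = route_tokens(hidden)
```

Because routing happens per token rather than per request, reasoning tokens and computation tokens can interleave inside one generation pass, which is the property the abstract credits for avoiding multi-agent communication overhead.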
Hengbo Xiao
Peking University
Jingyuan Fan
Peking University
Xin Tong
Beihang University
Jingzhao Zhang
Tsinghua University
Chao Lu
Tsinghua University
Guannan He
Peking University
Energy System · Mobility · Energy Storage · Optimization