LLM Router: Prefill is All You Need

📅 2026-03-21
🤖 AI Summary
When large language models exhibit comparable overall performance, their task-specific capabilities often complement one another, making efficient model routing critical to surpassing the performance ceiling of any single model. This work proposes a routing mechanism that leverages internal activations from the prefill phase, decoupling the encoder from the target model (Encoder-Target Decoupling) and employing Fisher separability (J) and effective dimensionality (d_eff) as predictive metrics to forecast and select the best-performing target model. Implemented in the proposed SharedTrunkNet architecture, the method bridges 45.58% of the accuracy gap between the strongest individual model and the oracle upper bound while reducing computational cost by 74.31% relative to the most expensive model.

📝 Abstract
LLMs often share comparable benchmark accuracies, but their complementary performance across task subsets suggests that an Oracle router (a theoretical selector with perfect foresight) can significantly surpass standalone model accuracy by exploiting model-specific strengths. While current routers rely on fragile semantic signals, we propose using internal prefill activations via Encoder-Target Decoupling, a functional separation between the model providing the predictive signal (the Encoder) and the model whose performance is being estimated (the Target). This allows optimized heterogeneous pairing between unique encoders and target models. We use Fisher Separability (J) and Effective Dimensionality (d_eff) as mathematical probes to isolate optimal layer-wise signals, providing the predictive foundation for our SharedTrunkNet architecture. SharedTrunkNet captures up to 45.58% of the accuracy gap between the strongest standalone model and the Oracle while achieving 74.31% cost savings relative to the highest-cost model.
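The abstract does not spell out formulas for its two probes, but the names suggest standard definitions: Fisher separability as the ratio of between-class to within-class scatter of prefill activations (e.g., labeled by whether a target model answered correctly), and effective dimensionality as the participation ratio of the activation covariance spectrum. A minimal sketch under those assumptions; the function names and the binary labeling scheme are illustrative, not taken from the paper:

```python
import numpy as np

def fisher_separability(X, y):
    """Fisher criterion J: between-class scatter divided by within-class
    scatter of activation vectors X (n_samples, d) under labels y
    (assumed here: 1 if the target model answers correctly, else 0)."""
    mu = X.mean(axis=0)
    s_between = 0.0
    s_within = 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        s_between += len(Xc) * np.sum((mu_c - mu) ** 2)
        s_within += np.sum((Xc - mu_c) ** 2)
    return s_between / s_within

def effective_dimensionality(X):
    """Participation ratio d_eff = (sum lambda)^2 / sum(lambda^2) over the
    eigenvalues of the activation covariance; ranges from 1 to d."""
    lam = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    lam = np.clip(lam, 0.0, None)  # guard against tiny negative eigenvalues
    return lam.sum() ** 2 / (lam ** 2).sum()
```

Under this reading, a layer whose activations yield high J (the correct/incorrect classes are linearly well separated) and a moderate d_eff would be a good source of routing signal for a given encoder-target pair.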
Problem

Research questions and friction points this paper is trying to address.

LLM routing
model selection
complementary performance
Oracle router
prefill activations
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM Router
Prefill Activations
Encoder-Target Decoupling
Fisher Separability
SharedTrunkNet
Tanay Varshney
NVIDIA
Annie Surla
NVIDIA
Michelle Xu
NVIDIA
Gomathy Venkata Krishnan
NVIDIA
Maximilian Jeblick
NVIDIA
David Austin
Deakin University
Psychology
Neal Vaidya
NVIDIA
Davide Onofrio
NVIDIA