🤖 AI Summary
When large language models exhibit comparable overall performance, their task-specific capabilities often complement one another, making efficient model routing critical to surpassing the performance ceiling of any single model. This work proposes a routing mechanism that leverages internal activations from the prefill phase, decoupling the encoder from the target model (Encoder-Target Decoupling) and employing Fisher separability (J) and effective dimensionality (d_eff) as metrics to predict which target model will perform best on a given input. Implemented in the proposed SharedTrunkNet architecture, the method bridges 45.58% of the accuracy gap between the strongest individual model and the oracle upper bound while reducing computational cost by 74.31% relative to the most expensive model.
📝 Abstract
LLMs often share comparable benchmark accuracies, but their complementary performance across task subsets suggests that an Oracle router--a theoretical selector with perfect foresight--can significantly surpass standalone model accuracy by exploiting model-specific strengths. Whereas current routers rely on fragile semantic signals, we propose using internal prefill activations via Encoder-Target Decoupling--a functional separation between the model providing the predictive signal (the Encoder) and the model whose performance is being estimated (the Target). This decoupling allows each target model to be paired with whichever encoder yields the most informative signal. We use Fisher Separability (J) and Effective Dimensionality (d_eff) as mathematical probes to isolate the most predictive layer-wise signals, providing the foundation for our SharedTrunkNet architecture. SharedTrunkNet captures up to 45.58% of the accuracy gap between the strongest standalone model and the Oracle while achieving 74.31% cost savings relative to the highest-cost model.
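To make the two probes concrete, here is a minimal sketch of how they could be computed from prefill activations. This is an illustration under stated assumptions, not the paper's implementation: it assumes J is the classic Fisher discriminant ratio (between-class scatter over within-class scatter, averaged across dimensions) and d_eff is the participation ratio of the activation covariance spectrum; the paper's exact definitions may differ.

```python
import numpy as np

def fisher_separability(pos, neg):
    """Fisher criterion J per dimension, averaged.

    pos/neg: (n, d) activations for prompts the target model
    answers correctly / incorrectly (assumed labeling scheme).
    """
    mu_p, mu_n = pos.mean(axis=0), neg.mean(axis=0)
    var_p, var_n = pos.var(axis=0), neg.var(axis=0)
    # Between-class separation over within-class spread, per dimension.
    return float(np.mean((mu_p - mu_n) ** 2 / (var_p + var_n + 1e-8)))

def effective_dimensionality(acts):
    """Participation ratio: d_eff = (sum lambda_i)^2 / sum lambda_i^2,
    where lambda_i are eigenvalues of the activation covariance."""
    lam = np.linalg.eigvalsh(np.cov(acts.T))
    lam = np.clip(lam, 0.0, None)  # guard against tiny negative eigenvalues
    return float(lam.sum() ** 2 / (np.square(lam).sum() + 1e-12))

# Toy example: score one hypothetical encoder layer's activations.
rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, size=(200, 64))  # activations for "correct" prompts
neg = rng.normal(0.0, 1.0, size=(200, 64))  # activations for "incorrect" prompts
J = fisher_separability(pos, neg)
d_eff = effective_dimensionality(np.vstack([pos, neg]))
```

In this reading, a layer with high J separates prompts the target model will and will not solve, while d_eff indicates how many directions of the activation space carry that signal; both are computed per layer to select where the router should read its features.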