AI Summary
This work addresses the challenge of predicting multilingual model performance when direct evaluation data for target languages is unavailable. The authors introduce a controlled benchmark of 1,500 questions spanning six task types and five evidence scenarios, along with Litmus (Re)Agent, a novel system that brings structured agent-based reasoning to performance prediction. By integrating hypothesis generation, cross-lingual evidence retrieval, and feature-aware aggregation, Litmus (Re)Agent substantially improves prediction accuracy when evidence is sparse or missing. Built on a directed acyclic graph (DAG)-orchestrated agent architecture, the proposed approach outperforms six strong baselines, with the largest gains observed in transfer-reliant settings where direct evidence is limited.
Abstract
We study predictive multilingual evaluation: estimating how well a model will perform on a task in a target language when direct benchmark results are missing. This problem is common in multilingual deployment, where evaluation coverage is sparse and published evidence is uneven across languages, tasks, and model families. We introduce a controlled benchmark of 1,500 questions spanning six tasks and five evidence scenarios. The benchmark separates accessible evidence from ground truth, enabling evaluation of systems that must infer missing results from incomplete literature evidence. We also present Litmus (Re)Agent, a DAG-orchestrated agentic system that decomposes queries into hypotheses, retrieves evidence, and synthesises predictions through feature-aware aggregation. Across six systems, Litmus (Re)Agent achieves the best overall performance, with the largest gains in transfer-heavy scenarios where direct evidence is weak or absent. These results show that structured agentic reasoning is a promising approach to multilingual performance estimation under incomplete evidence.
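The pipeline described above (query decomposition into hypotheses, evidence retrieval, and feature-aware aggregation) can be sketched as a small DAG-ordered program. This is an illustrative reconstruction, not the authors' implementation: all function names, the hypothesis keys, and the relevance-weighted aggregation rule are hypothetical stand-ins for the paper's components.

```python
# Illustrative sketch of a DAG-orchestrated prediction pipeline:
# hypothesis generation -> evidence retrieval -> feature-aware aggregation.
# All names, keys, and weights here are hypothetical.
from dataclasses import dataclass

@dataclass
class Evidence:
    score: float       # a benchmark score found in the literature
    relevance: float   # 0..1: closeness of the source language/task to the target

def generate_hypotheses(target_lang, task):
    # Each hypothesis names a candidate source of evidence for the missing result:
    # direct results, or transfer from a related language or related task.
    return [f"direct:{target_lang}/{task}",
            f"transfer:related-language/{task}",
            f"transfer:{target_lang}/related-task"]

def retrieve_evidence(hypothesis, corpus):
    # Look up any literature results indexed under the hypothesis key.
    return corpus.get(hypothesis, [])

def aggregate(evidence):
    # Feature-aware aggregation, sketched here as a relevance-weighted mean.
    total = sum(e.relevance for e in evidence)
    if total == 0:
        return None  # no usable evidence for this query
    return sum(e.score * e.relevance for e in evidence) / total

def predict(target_lang, task, corpus):
    # DAG order: hypotheses fan out into retrieval nodes, results fan back in.
    evidence = [e
                for h in generate_hypotheses(target_lang, task)
                for e in retrieve_evidence(h, corpus)]
    return aggregate(evidence)

# Toy evidence store: no direct result for the target, only transfer evidence.
corpus = {
    "transfer:related-language/qa": [Evidence(score=0.70, relevance=0.8)],
    "transfer:sw/related-task":     [Evidence(score=0.60, relevance=0.4)],
}
print(predict("sw", "qa", corpus))  # relevance-weighted estimate from transfer evidence
```

In this toy run the direct-evidence hypothesis retrieves nothing, so the prediction rests entirely on down-weighted transfer evidence, mirroring the transfer-heavy scenarios where the paper reports its largest gains.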