SimBA: Simplifying Benchmark Analysis Using Performance Matrices Alone

📅 2025-10-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the complexity and inefficiency of large-scale language model (LM) benchmarking—which hinders rapid model selection and dataset validation—this paper introduces SimBA, a novel framework that automatically identifies a highly representative minimal subset of benchmark data using only raw model scores across datasets. SimBA comprises three stages: *stalk* (model-dataset relationship modeling), *prowl* (compact subset discovery), and *pounce* (performance prediction based on the subset), jointly optimizing for coverage fidelity and rank preservation. Evaluated on HELM, MMLU, and BigBenchLite, SimBA achieves ≥95% full-benchmark performance coverage using only 6.25%, 1.7%, and 28.4% of the original data, respectively, while preserving model rankings with high stability and yielding near-zero prediction error for unseen models. By drastically reducing benchmark size without sacrificing fidelity or interpretability, SimBA significantly enhances the efficiency, scalability, and transparency of LM evaluation.

📝 Abstract
Modern language models are evaluated on large benchmarks, which are difficult to make sense of, especially for model selection. Looking at the raw evaluation numbers themselves through a model-centric lens, we propose SimBA, a three-phase framework to Simplify Benchmark Analysis. The three phases of SimBA are: stalk, where we conduct dataset and model comparisons; prowl, where we discover a representative subset; and pounce, where we use the representative subset to predict performance on a held-out set of models. Applying SimBA to three popular LM benchmarks (HELM, MMLU, and BigBenchLite) reveals that across all three benchmarks, datasets and models relate strongly to one another (stalk). We develop a representative set discovery algorithm which covers a benchmark using raw evaluation scores alone. Using our algorithm, we find that with 6.25% (1/16), 1.7% (1/58), and 28.4% (21/74) of the datasets for HELM, MMLU, and BigBenchLite respectively, we achieve coverage levels of at least 95% (prowl). Additionally, using just these representative subsets, we can both preserve model ranks and predict performance on a held-out set of models with near-zero mean-squared error (pounce). Taken together, SimBA can help model developers improve efficiency during model training and help dataset creators validate whether a newly created dataset differs from existing datasets in a benchmark. Our code is open source, available at https://github.com/nishantsubramani/simba.
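The prowl phase can be pictured as a greedy coverage search over the raw score matrix (models × datasets). The sketch below is an illustration under assumed details, not the paper's exact algorithm: the function name `greedy_representative_subset`, the `target` parameter, and the coverage criterion (a dataset counts as covered once it correlates with some chosen dataset at the target level) are all assumptions for illustration.

```python
import numpy as np

def greedy_representative_subset(scores, target=0.95):
    """Greedily pick dataset columns from a (models x datasets) score
    matrix until every dataset is covered, i.e. correlates with some
    chosen dataset at >= target. Illustrative criterion only."""
    n_models, n_datasets = scores.shape
    corr = np.corrcoef(scores.T)              # dataset-dataset correlations
    chosen = []
    covered = np.zeros(n_datasets, dtype=bool)
    while not covered.all():
        # pick the dataset that newly covers the most uncovered datasets
        gains = [(np.sum((corr[j] >= target) & ~covered), j)
                 for j in range(n_datasets) if j not in chosen]
        _, best = max(gains)
        chosen.append(best)
        covered |= corr[best] >= target
    return chosen
```

On a benchmark whose datasets fall into a few highly correlated groups, a search like this terminates after one pick per group, which is the regime in which a small fraction of datasets can reach high coverage.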
Problem

Research questions and friction points this paper is trying to address.

Simplifying benchmark analysis for large language model evaluation
Identifying representative dataset subsets to predict model performance
Reducing evaluation complexity while preserving model ranking accuracy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Three-phase framework simplifies benchmark analysis process
Uses performance matrices alone for representative subset discovery
Predicts model performance with minimal datasets accurately
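The prediction step above can be illustrated with a simple regression: on the seen models, fit the full-benchmark mean score as a function of scores on the representative subset, then apply the fit to held-out models. This is a minimal sketch under assumptions, not SimBA's actual predictor; the helper name `fit_and_predict` and the least-squares choice are illustrative.

```python
import numpy as np

def fit_and_predict(scores_seen, scores_heldout, subset):
    """Predict held-out models' full-benchmark mean score from their
    scores on the representative subset columns, via least squares
    with a bias term. Illustrative predictor only."""
    X = scores_seen[:, subset]
    y = scores_seen.mean(axis=1)              # full-benchmark mean score
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    Xh = np.hstack([scores_heldout[:, subset],
                    np.ones((scores_heldout.shape[0], 1))])
    return Xh @ w
```

If the subset truly covers the benchmark, the residual error on held-out models stays near zero, and comparing the predicted means against the true means (e.g. via rank correlation) checks whether model rankings are preserved.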