🤖 AI Summary
Existing evaluation methods do not offer fine-grained, scalable assessment of small language models (SLMs) as judges of answer correctness.
Method: We propose JudgeBoard, the first benchmark designed specifically to evaluate SLMs' judging capabilities on mathematical and science/commonsense reasoning. It builds task-specific leaderboards using both accuracy-based ranking and Elo-based rating. We further design MAJ (Multi-Agent Judging), a multi-agent framework in which heterogeneous SLMs engage in collaborative debate and interactive reasoning to improve judgment reliability and consistency.
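The paper does not detail the MAJ loop here; the following is a minimal, hypothetical sketch of collaborative multi-agent judging (the `Agent` stub and its `judge` method are assumptions, standing in for real SLM calls), where each agent issues a verdict, later rounds expose peers' verdicts for possible revision, and the final judgment is a majority vote:

```python
from collections import Counter

class Agent:
    """Stub judge agent; a real agent would wrap an SLM call (hypothetical)."""
    def __init__(self, name, verdict):
        self.name = name
        self._verdict = verdict

    def judge(self, question, answer, peer_verdicts=None):
        # A real SLM could revise its verdict after seeing peers' votes;
        # this stub simply returns a fixed verdict.
        return self._verdict

def multi_agent_judge(question, answer, agents, rounds=2):
    """Majority verdict after `rounds` of deliberation among the agents."""
    verdicts = {a.name: a.judge(question, answer) for a in agents}
    for _ in range(rounds - 1):
        # Each agent re-judges with visibility into the previous round's verdicts.
        verdicts = {a.name: a.judge(question, answer, peer_verdicts=dict(verdicts))
                    for a in agents}
    return Counter(verdicts.values()).most_common(1)[0][0]
```

With three stub agents voting correct/correct/incorrect, the majority verdict is "correct"; a real deployment would replace the stubs with prompted SLMs having distinct reasoning profiles.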
Contribution/Results: Experiments reveal that standalone SLMs significantly underperform large language models (LLMs) on judgment tasks; however, when augmented with MAJ, smaller backbones match or even surpass their larger-sized counterparts in accuracy and stability on the MATH dataset. This shows that structured collaboration enables SLMs to perform high-level judgment tasks effectively, challenging the assumption that only large models are suitable for such roles.
📝 Abstract
While small language models (SLMs) have shown promise on various reasoning tasks, their ability to judge the correctness of answers remains unclear compared to large language models (LLMs). Prior work on LLM-as-a-judge frameworks typically relies on comparing candidate answers against ground-truth labels or other candidate answers using predefined metrics like entailment. However, this approach is inherently indirect and difficult to fully automate, offering limited support for fine-grained and scalable evaluation of reasoning outputs. In this work, we propose JudgeBoard, a novel evaluation pipeline that directly queries models to assess the correctness of candidate answers without requiring extra answer comparisons. We focus on two core reasoning domains: mathematical reasoning and science/commonsense reasoning, and construct task-specific evaluation leaderboards using both accuracy-based ranking and an Elo-based rating system across five benchmark datasets, enabling consistent model comparison as judges rather than comparators. To improve judgment performance in lightweight models, we propose MAJ (Multi-Agent Judging), a novel multi-agent evaluation framework that leverages multiple interacting SLMs with distinct reasoning profiles to approximate LLM-level judgment accuracy through collaborative deliberation. Experimental results reveal a significant performance gap between SLMs and LLMs in isolated judging tasks. However, our MAJ framework substantially improves the reliability and consistency of SLMs. On the MATH dataset, MAJ using smaller-sized models as backbones performs comparably to, or even better than, their larger-sized counterparts. Our findings highlight that multi-agent SLM systems can potentially match or exceed LLM performance in judgment tasks, with implications for scalable and efficient assessment.
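The abstract mentions an Elo-based rating system for ranking judge models; a minimal sketch of the standard Elo update (the K-factor of 32 and 400-point scale are conventional defaults, not values taken from the paper) could look like this, applied per pairwise comparison between two judges:

```python
def elo_update(rating_a, rating_b, score_a, k=32):
    """Standard Elo update for one pairwise comparison.

    score_a: 1.0 if judge A wins, 0.0 if it loses, 0.5 for a draw.
    Returns the updated (rating_a, rating_b).
    """
    # Expected score of A under the logistic Elo model (400-point scale).
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))
    new_a = rating_a + k * (score_a - expected_a)
    # The two updates are zero-sum: B's change mirrors A's.
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b
```

For two equally rated judges (e.g. both at 1000), a win moves the winner up by K/2 = 16 points and the loser down by the same amount; repeated over many judging comparisons, these updates converge to a leaderboard ordering.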