Vintage Code, Modern Judges: Meta-Validation in Low Data Regimes

📅 2025-10-31
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
In modernizing legacy languages such as COBOL, the scarcity of domain experts and of manually annotated evaluation data hinders the trustworthy deployment of large language models as judges (LaaJ). Method: This paper proposes SparseAlign, a framework for low-resource settings that quantifies LaaJ alignment with human judgments on both ranking consistency and score proximity via pairwise confidence modeling and a score-sensitive alignment metric, requiring only minimal human annotations and avoiding reliance on large-scale labeling or unvalidated cyclic evaluation. Contribution/Results: SparseAlign makes reliability validation for LaaJ feasible and robust under sparse annotation. Evaluated on COBOL code explanation, it identifies high-alignment LaaJ models and directly informs model release decisions, demonstrating practical efficacy.
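
The pairwise angle is what stretches a sparse label budget: n annotated examples yield n(n-1)/2 ordered pairs on which a judge can be checked. The sketch below illustrates that intuition with a plain concordance count over human-ordered pairs; it is an illustrative stand-in, not SparseAlign's published pairwise-confidence model.

```python
import itertools

def pairwise_concordance(human_scores, judge_scores):
    """Fraction of human-ordered pairs that the judge ranks the same way.

    Illustrative only: a plain concordance count, not SparseAlign's
    pairwise-confidence model, which the paper defines more carefully.
    """
    concordant, total = 0, 0
    for i, j in itertools.combinations(range(len(human_scores)), 2):
        h_diff = human_scores[i] - human_scores[j]
        if h_diff == 0:  # a human tie carries no ranking signal; skip it
            continue
        total += 1
        if h_diff * (judge_scores[i] - judge_scores[j]) > 0:
            concordant += 1
    return concordant / total if total else float("nan")

# Five annotated COBOL explanations already yield ten checkable pairs.
print(pairwise_concordance([5, 3, 4, 2, 1], [5, 2, 4, 3, 1]))  # 0.9
```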

๐Ÿ“ Abstract
Application modernization in legacy languages such as COBOL, PL/I, and REXX faces an acute shortage of resources, both in expert availability and in high-quality human evaluation data. While Large Language Models as a Judge (LaaJ) offer a scalable alternative to expert review, their reliability must be validated before being trusted in high-stakes workflows. Without principled validation, organizations risk a circular evaluation loop, where unverified LaaJs are used to assess model outputs, potentially reinforcing unreliable judgments and compromising downstream deployment decisions. Although various automated approaches to validating LaaJs have been proposed, alignment with human judgment remains a widely used and conceptually grounded validation strategy. In many real-world domains, the availability of human-labeled evaluation data is severely limited, making it difficult to assess how well a LaaJ aligns with human judgment. We introduce SparseAlign, a formal framework for assessing LaaJ alignment with sparse human-labeled data. SparseAlign combines a novel pairwise-confidence concept with a score-sensitive alignment metric that jointly capture ranking consistency and score proximity, enabling reliable evaluator selection even when traditional statistical methods are ineffective due to limited annotated examples. SparseAlign was applied internally to select LaaJs for COBOL code explanation. The top-aligned evaluators were integrated into assessment workflows, guiding model release decisions. We present a case study of four LaaJs to demonstrate SparseAlign's utility in real-world evaluation scenarios.
Problem

Research questions and friction points this paper is trying to address.

Validating LLM judges with limited human evaluation data
Addressing circular evaluation risks in automated code assessment
Ensuring alignment between AI judges and human judgment
Innovation

Methods, ideas, or system contributions that make the work stand out.

SparseAlign framework assesses LaaJ alignment with sparse human data
Combines pairwise-confidence concept with score-sensitive alignment metric
Enables reliable evaluator selection in low data regimes (sketched below)
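
To make the selection workflow concrete, here is a minimal Python sketch: ranking consistency is approximated with SciPy's Kendall tau and score proximity with mean absolute distance, blended by a weight `w`. The blend, the `score_range` normalization (a 1-5 rating scale is assumed), and the candidate judge scores are all illustrative assumptions; the paper's score-sensitive metric is defined differently.

```python
from scipy.stats import kendalltau

def alignment(human, judge, w=0.5, score_range=4.0):
    """Toy blend of ranking consistency and score proximity in [0, 1].

    `w` and `score_range` (the span of a 1-5 scale) are assumptions
    for illustration; SparseAlign's score-sensitive metric differs.
    """
    tau, _ = kendalltau(human, judge)
    rank_term = (tau + 1.0) / 2.0  # map Kendall tau from [-1, 1] to [0, 1]
    mad = sum(abs(h - j) for h, j in zip(human, judge)) / len(human)
    proximity = 1.0 - mad / score_range
    return w * rank_term + (1.0 - w) * proximity

# Pick the best-aligned judge among a few candidates, echoing the
# paper's four-LaaJ case study (all scores here are made up).
human = [5, 3, 4, 2, 1]
candidates = {
    "judge_a": [5, 2, 4, 3, 1],  # close in both rank and score
    "judge_b": [2, 2, 3, 1, 1],  # right ordering, deflated scores
    "judge_c": [1, 4, 2, 5, 3],  # poorly aligned
}
best = max(candidates, key=lambda name: alignment(human, candidates[name]))
print(best)
```

Taking the `max` over such an alignment score mirrors the evaluator-selection step the paper describes: the top-aligned judge is the one promoted into the assessment workflow.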
Authors
Ora Fandina, IBM Research, Israel
Gal Amram, IBM Research, Israel
Eitan Farchi, IBM Research Lab in Haifa
Shmulik Froimovich, unknown affiliation
Raviv Gal, IBM Research, Israel
Wesam Ibraheem, IBM Research, Israel
Rami Katan, IBM Research, Israel
Alice Podolsky, IBM Research, Israel
Orna Raz, IBM Research, Israel