Truth or Twist? Optimal Model Selection for Reliable Label Flipping Evaluation in LLM-based Counterfactuals

📅 2025-05-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the inconsistency in label-flip evaluation within large language model (LLM)-driven counterfactual data augmentation (CDA). We systematically investigate how the relationship between the generator and discriminator (“judge” model) affects evaluation reliability. Introducing the first taxonomy of four generator–discriminator relationship types, we conduct empirical analysis across five generative models, fifteen discriminative models, three benchmark datasets, and a 90-participant user study. Results show that independent, non-fine-tuned discriminators significantly improve label-flip evaluation consistency and reliability; however, their predictions still substantially diverge from human judgments—revealing an inherent limitation of fully automated CDA evaluation. Our core contributions are: (1) establishing a principled, empirically grounded paradigm for discriminator selection in CDA evaluation, and (2) demonstrating the indispensable role of human supervision in achieving high-quality counterfactual augmentation.

📝 Abstract
Counterfactual examples are widely employed to enhance the performance and robustness of large language models (LLMs) through counterfactual data augmentation (CDA). However, the selection of the judge model used to evaluate label flipping, the primary metric for assessing the validity of generated counterfactuals for CDA, yields inconsistent results. To decipher this, we define four types of relationships between the counterfactual generator and judge models. Through extensive experiments involving two state-of-the-art LLM-based methods, three datasets, five generator models, and 15 judge models, complemented by a user study (n = 90), we demonstrate that judge models with an independent, non-fine-tuned relationship to the generator model provide the most reliable label flipping evaluations. Generator-judge relationships whose evaluations align closely with the user study also yield better model performance and robustness after CDA. Nevertheless, we find that the gap between the most effective judge models and the results obtained from the user study remains considerably large. This suggests that a fully automated pipeline for CDA may be inadequate and requires human intervention.
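The label-flip metric the abstract centers on can be sketched in a few lines: a counterfactual counts as valid when a judge model assigns it the intended target label instead of the original one. The following is a minimal illustrative sketch, not the paper's implementation; `toy_judge` and all field names are hypothetical stand-ins for an independent, non-fine-tuned judge model.

```python
def label_flip_rate(examples, judge):
    """Fraction of counterfactuals whose judge-predicted label flipped
    from the original label to the intended target label."""
    if not examples:
        return 0.0
    flipped = 0
    for ex in examples:
        pred = judge(ex["counterfactual_text"])
        # A valid flip: the judge predicts the target label,
        # which differs from the original label.
        if pred == ex["target_label"] and pred != ex["original_label"]:
            flipped += 1
    return flipped / len(examples)

# Toy usage with a trivial keyword "judge" standing in for an LLM classifier.
def toy_judge(text):
    return "positive" if "great" in text else "negative"

data = [
    {"counterfactual_text": "The film was great after all.",
     "original_label": "negative", "target_label": "positive"},
    {"counterfactual_text": "Still a dull, plodding movie.",
     "original_label": "negative", "target_label": "positive"},
]
print(label_flip_rate(data, toy_judge))  # → 0.5
```

The paper's central question is which judge to plug in for `judge`: its finding is that an independent, non-fine-tuned model gives the most reliable estimate of this rate, though still one that diverges from human judgments.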
Problem

Research questions and friction points this paper is trying to address.

Inconsistent results in judge model selection for label flipping evaluation
Optimal relationship between generator and judge models for reliable evaluations
Gap between automated CDA pipeline and human intervention needs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Independent, non-fine-tuned judge models improve evaluation reliability and consistency
Generator-judge relationships aligned with human judgments improve model performance and robustness
Human intervention remains necessary in automated CDA pipelines