🤖 AI Summary
This paper addresses the inconsistency of label-flip evaluation in large language model (LLM)-driven counterfactual data augmentation (CDA). We systematically investigate how the relationship between the generator and the judge (discriminator) model affects evaluation reliability. Introducing a taxonomy of four generator–judge relationship types, we conduct an empirical analysis across five generator models, fifteen judge models, three benchmark datasets, and a 90-participant user study. Results show that independent, non-fine-tuned judges substantially improve the consistency and reliability of label-flip evaluation; however, their predictions still diverge considerably from human judgments, revealing an inherent limitation of fully automated CDA evaluation. Our core contributions are: (1) a principled, empirically grounded paradigm for judge selection in CDA evaluation, and (2) evidence that human supervision remains indispensable for high-quality counterfactual augmentation.
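One way to picture the four relationship types is as two binary axes suggested by the phrase "independent, non-fine-tuned": whether the judge is the same model as the generator, and whether the judge has been fine-tuned on the task. The sketch below is a hypothetical illustration of that reading in Python; the axis and type names are placeholders, not the paper's terminology.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical two-axis generator-judge taxonomy. The type names are
# illustrative placeholders, not the paper's actual terminology.
class Relationship(Enum):
    SAME_MODEL_FINE_TUNED = "same model, fine-tuned on the task"
    SAME_MODEL_ZERO_SHOT = "same model, not fine-tuned"
    INDEPENDENT_FINE_TUNED = "independent model, fine-tuned on the task"
    INDEPENDENT_ZERO_SHOT = "independent model, not fine-tuned"  # most reliable per the paper

@dataclass
class JudgeConfig:
    generator: str          # placeholder model name, e.g. "generator-llm"
    judge: str              # placeholder model name, e.g. "judge-llm"
    judge_fine_tuned: bool

    def relationship(self) -> Relationship:
        """Classify the generator-judge pairing along the two axes."""
        if self.judge == self.generator:
            return (Relationship.SAME_MODEL_FINE_TUNED if self.judge_fine_tuned
                    else Relationship.SAME_MODEL_ZERO_SHOT)
        return (Relationship.INDEPENDENT_FINE_TUNED if self.judge_fine_tuned
                else Relationship.INDEPENDENT_ZERO_SHOT)

print(JudgeConfig("generator-llm", "judge-llm", judge_fine_tuned=False).relationship())
# Relationship.INDEPENDENT_ZERO_SHOT
```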
📝 Abstract
Counterfactual examples are widely employed to enhance the performance and robustness of large language models (LLMs) through counterfactual data augmentation (CDA). However, the choice of judge model used to evaluate label flipping, the primary metric for assessing the validity of generated counterfactuals for CDA, yields inconsistent results. To understand why, we define four types of relationships between the counterfactual generator and the judge model. Through extensive experiments involving two state-of-the-art LLM-based methods, three datasets, five generator models, and 15 judge models, complemented by a user study (n = 90), we demonstrate that judge models in an independent, non-fine-tuned relationship to the generator model provide the most reliable label-flipping evaluations. Generator–judge relationships whose evaluations align closely with the user study also yield better model performance and robustness when the resulting counterfactuals are used for CDA. Nevertheless, the gap between the most effective judge models and the user-study results remains considerable. This suggests that a fully automated CDA pipeline may be inadequate without human intervention.
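To make the label-flip metric concrete, here is a minimal Python sketch of how a label-flip rate might be computed with an independent judge. The `judge` interface and the toy keyword-based sentiment judge are illustrative assumptions, not the paper's implementation; in practice the judge would be an LLM classifier applied to each counterfactual.

```python
from typing import Callable, List, Tuple

def label_flip_rate(
    pairs: List[Tuple[str, str, str]],   # (original_text, original_label, counterfactual_text)
    judge: Callable[[str], str],          # independent, non-fine-tuned classifier: text -> label
) -> float:
    """Fraction of counterfactuals whose judged label differs from the original label."""
    flips = sum(judge(cf) != label for _, label, cf in pairs)
    return flips / len(pairs) if pairs else 0.0

# Toy rule-based judge standing in for an LLM judge (assumption for illustration).
def toy_sentiment_judge(text: str) -> str:
    return "positive" if "great" in text.lower() else "negative"

pairs = [
    ("The movie was dull.", "negative", "The movie was great."),   # label flips
    ("The movie was dull.", "negative", "The movie was boring."),  # label does not flip
]
print(f"label-flip rate: {label_flip_rate(pairs, toy_sentiment_judge):.2f}")  # 0.50
```

A higher flip rate means the judge deems more counterfactuals valid; the paper's finding is that which judge fills this role, and its relationship to the generator, changes the measured rate enough to alter conclusions.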