🤖 AI Summary
This work addresses the challenge of automatically identifying erroneous behaviors of large language models (LLMs) in NLP tasks under data-scarce or zero-shot settings, where ground-truth labels are unavailable. We propose an oracle-free defect detection method based on metamorphic testing (MT), systematically constructing and validating 191 metamorphic relations covering semantic, syntactic, and task-specific logic, the most comprehensive MT study for LLMs to date. We select 36 representative relations and conduct 560,000 tests across three major LLM families, quantitatively measuring response consistency. Our experiments reveal, for the first time, systematic inconsistency in LLMs under diverse semantic-preserving transformations, empirically demonstrating MT's efficacy in exposing robustness deficiencies. We further characterize MT's applicability boundaries and inherent limitations. This work establishes a scalable, annotation-light paradigm for trustworthy LLM evaluation.
📄 Abstract
Using Large Language Models (LLMs) to perform Natural Language Processing (NLP) tasks has become increasingly pervasive. The versatile nature of LLMs makes them applicable to a wide range of such tasks. While the performance of recent LLMs is generally outstanding, several studies have shown that LLMs often produce incorrect results. Automatically identifying these faulty behaviors is extremely useful for improving the effectiveness of LLMs. One obstacle is the limited availability of labeled datasets, which necessitates an oracle to determine the correctness of LLM behaviors. Metamorphic Testing (MT) is a popular testing approach that alleviates this oracle problem. At the core of MT are Metamorphic Relations (MRs), which define the relationship between the outputs of related inputs. MT can expose faulty behaviors without the need for explicit oracles (e.g., labeled datasets). This paper presents the most comprehensive study of MT for LLMs to date. We conducted a literature review and collected 191 MRs for NLP tasks. We implemented a representative subset (36 MRs) and conducted a series of experiments with three popular LLMs, running $\sim 560\,\mathrm{K}$ metamorphic tests. The results shed light on the capabilities and opportunities of MT for LLMs, as well as its limitations.
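To make the core idea concrete, the following is a minimal sketch of a metamorphic test for an LLM-based classifier. The `classify` function is a hypothetical stand-in for a real LLM call, and `mr_synonym` is an illustrative semantic-preserving MR; the paper's actual MRs, tasks, and models differ. The key point is that correctness is checked via output consistency, with no labeled ground truth required.

```python
def classify(text: str) -> str:
    """Toy sentiment 'model' standing in for an LLM call (assumption)."""
    lowered = text.lower()
    return "positive" if ("great" in lowered or "good" in lowered) else "negative"

def mr_synonym(text: str) -> str:
    """Semantic-preserving transformation: replace a word with a synonym."""
    return text.replace("great", "good")

def metamorphic_test(text: str) -> bool:
    """An MR for classification: a semantic-preserving rewrite of the input
    should not change the predicted label. A False result flags a potential
    faulty behavior without needing an explicit oracle."""
    return classify(text) == classify(mr_synonym(text))

print(metamorphic_test("The movie was great."))  # consistent outputs -> True
```

In practice, `classify` would wrap an API call to the model under test, and a violation of the MR (a `False` result) would be logged as an inconsistency rather than asserted.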