Metamorphic Testing of Large Language Models for Natural Language Processing

πŸ“… 2025-09-07
πŸ›οΈ IEEE International Conference on Software Maintenance and Evolution
πŸ“ˆ Citations: 2
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the challenge of automatically identifying erroneous behaviors of large language models (LLMs) in NLP tasks under data-scarce or zero-shot settings, where ground-truth labels are unavailable. We propose an oracle-free defect detection method based on metamorphic testing (MT), systematically constructing and validating 191 metamorphic relations covering semantic, syntactic, and task-specific logicβ€”the most comprehensive MT study for LLMs to date. We select 36 representative relations and conduct 560,000 tests across three major LLM families, quantitatively measuring response consistency. Our experiments reveal, for the first time, systematic inconsistency in LLMs under diverse semantic-preserving transformations, empirically demonstrating MT’s efficacy in exposing robustness deficiencies. We further characterize MT’s applicability boundaries and inherent limitations. This work establishes a scalable, annotation-light paradigm for trustworthy LLM evaluation.

Technology Category

Application Category

πŸ“ Abstract
Using Large Language Models (LLMs) to perform Natural Language Processing (NLP) tasks has been becoming increasingly pervasive in recent times. The versatile nature of LLMs makes them applicable to a wide range of such tasks. While the performance of recent LLMs is generally outstanding, several studies have shown that LLMs can often produce incorrect results. Automatically identifying these faulty behaviors is extremely useful for improving the effectiveness of LLMs. One obstacle to this is the limited availability of labeled datasets, necessitating an oracle to determine the correctness of LLM behaviors. Metamorphic Testing (MT) is a popular testing approach that alleviates this oracle problem. At the core of MT are Metamorphic Relations (MRs), defining the relationship between the outputs of related inputs. MT can expose faulty behaviors without the need for explicit oracles (e.g., labeled datasets). This paper presents the most comprehensive study of MT for LLMs to date. We conducted a literature review and collected 191 MRs for NLP tasks. We implemented a representative subset (36 MRs) to conduct a series of experiments with three popular LLMs, running $sim 560 ~mathrm{K}$ metamorphic tests. The results shed light on the capabilities and opportunities of MT for LLMs, as well as its limitations.
Problem

Research questions and friction points this paper is trying to address.

Testing large language models for faulty behaviors without labeled datasets
Identifying incorrect NLP outputs through metamorphic relations and transformations
Comprehensive evaluation of metamorphic testing effectiveness across multiple LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Metamorphic testing for LLMs without oracles
Comprehensive study with 191 metamorphic relations
Implemented 36 relations for 560,000 tests
πŸ”Ž Similar Papers
No similar papers found.
S
Steven Cho
University of Auckland
S
Stefano Ruberto
JRC European Commission
Valerio Terragni
Valerio Terragni
University of Auckland
Software EngineeringSoftware TestingAI4SESE4AIMetamorphic Testing