LLMORPH: Automated Metamorphic Testing of Large Language Models

📅 2026-03-24
📈 Citations: 0
Influential: 0
📝 Abstract
Automated testing is essential for evaluating and improving the reliability of Large Language Models (LLMs), yet the lack of automated oracles for verifying output correctness remains a key challenge. We present LLMORPH, an automated testing tool specifically designed for LLMs performing NLP tasks, which leverages Metamorphic Testing (MT) to uncover faulty behaviors without relying on human-labeled data. MT uses Metamorphic Relations (MRs) to generate follow-up inputs from source test inputs, enabling the detection of inconsistencies in model outputs without the need for expensive labeled data. LLMORPH is aimed at researchers and developers who want to evaluate the robustness of LLM-based NLP systems. In this paper, we detail the design, implementation, and practical usage of LLMORPH, demonstrating how it can be easily extended to any LLM, NLP task, and set of MRs. In our evaluation, we applied 36 MRs across four NLP benchmarks, testing three state-of-the-art LLMs: GPT-4, LLAMA3, and HERMES 2. This produced over 561,000 test executions. Results demonstrate LLMORPH's effectiveness in automatically exposing inconsistencies.
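The MT workflow the abstract describes can be sketched in a few lines. This is a minimal illustration, not LLMORPH's actual implementation: `query_llm` is a hypothetical stand-in for a real model call (here a toy sentiment classifier), and the synonym-swap MR is one example of a relation under which the output should stay unchanged.

```python
def query_llm(prompt: str) -> str:
    # Placeholder for a real LLM API call; fakes a sentiment
    # classifier for illustration purposes only.
    return "negative" if "terrible" in prompt.lower() else "positive"

def mr_synonym_swap(text: str) -> str:
    # Metamorphic relation: swapping a word for a synonym
    # should not change the predicted sentiment.
    return text.replace("awful", "terrible")

def metamorphic_test(source_input: str) -> bool:
    # Compare the output on the source input with the output
    # on the MR-generated follow-up input; a mismatch exposes
    # an inconsistency without needing a labeled oracle.
    source_out = query_llm(source_input)
    followup_out = query_llm(mr_synonym_swap(source_input))
    return source_out == followup_out  # True = consistent

print(metamorphic_test("The movie was awful."))   # → False (inconsistency exposed)
print(metamorphic_test("A great film overall."))  # → True  (consistent)
```

The toy classifier deliberately mishandles "awful", so the MR check flags it; this is the oracle-free failure signal that MT provides at scale.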
Problem

Research questions and friction points this paper is trying to address.

Large Language Models
Automated Testing
Metamorphic Testing
Oracle Problem
NLP Robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

Metamorphic Testing
Large Language Models
Automated Testing
Metamorphic Relations
NLP Robustness
Steven Cho
University of Auckland
Stefano Ruberto
JRC European Commission
Valerio Terragni
University of Auckland
Software Engineering · Software Testing · AI4SE · SE4AI · Metamorphic Testing