Enhancing Differential Testing With LLMs For Testing Deep Learning Libraries

📅 2024-06-12
📈 Citations: 1
Influential: 0
🤖 AI Summary
Differential testing of deep learning (DL) libraries faces two key bottlenecks: (1) difficulty in automatically identifying semantically equivalent API counterparts across libraries, and (2) insufficient input diversity, both of which limit how well the test oracle problem can be mitigated. To address these, the paper introduces DLLens, an LLM-enhanced differential testing technique. DLLens synthesizes a counterpart for a given API by composing and adapting APIs from another DL library, and it uses static analysis, aided by the LLM's knowledge of the library and its upstream dependencies, to extract path constraints that guide the generation of diverse, semantically valid inputs. Evaluated on TensorFlow and PyTorch, DLLens synthesizes counterparts for 1.84 times as many APIs as state-of-the-art techniques and, under the same time budget, covers 7.23% more branches and detects 1.88 times as many bugs. It uncovered 71 bugs, of which 59 were confirmed by developers, including 46 previously unknown bugs; 10 of those have already been fixed in the latest releases.
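To make the core idea concrete, here is a minimal sketch of cross-library differential testing. The specific API pair (tf.nn.softplus and torch.nn.functional.softplus) and the tolerances are illustrative assumptions, not taken from the paper; DLLens automates the pairing that is hard-coded here.

```python
# Minimal sketch of cross-library differential testing (illustrative, not DLLens itself).
# Assumes TensorFlow and PyTorch are installed; the API pair and tolerances are
# chosen only for illustration.
import numpy as np
import tensorflow as tf
import torch

def differential_test(np_input, rtol=1e-5, atol=1e-6):
    """Run the same input through a pair of counterpart APIs and compare outputs."""
    # TensorFlow side of the pair.
    tf_out = tf.nn.softplus(tf.constant(np_input)).numpy()
    # PyTorch counterpart (semantically equivalent API in the other library).
    torch_out = torch.nn.functional.softplus(torch.from_numpy(np_input)).numpy()
    # Any divergence beyond floating-point tolerance is a candidate bug.
    if not np.allclose(tf_out, torch_out, rtol=rtol, atol=atol):
        return False, float(np.max(np.abs(tf_out - torch_out)))
    return True, 0.0

ok, max_diff = differential_test(np.random.randn(4, 8).astype(np.float32))
print("consistent" if ok else f"divergence up to {max_diff}")
```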

📝 Abstract
Differential testing offers a promising strategy to alleviate the test oracle problem by comparing test results across alternative implementations. However, existing differential testing techniques for deep learning (DL) libraries are limited by two key challenges: finding alternative implementations (called counterparts) for a given API, and subsequently generating diverse test inputs. To address the two challenges, this paper introduces DLLens, an LLM-enhanced differential testing technique for DL libraries. To address the first challenge, DLLens incorporates an LLM-based counterpart synthesis workflow, with the insight that the counterpart of a given DL library API's computation can often be synthesized through composition and adaptation of APIs from another DL library. To address the second challenge, DLLens incorporates a static analysis technique that extracts path constraints from the implementations of a given API and its counterpart to guide diverse test input generation. The extraction is facilitated by the LLM's knowledge of the concerned DL library and its upstream libraries. We evaluate DLLens on two popular DL libraries, TensorFlow and PyTorch. Our evaluation shows that DLLens synthesizes counterparts for 1.84 times as many APIs as those found by state-of-the-art techniques on these libraries. Moreover, under the same time budget, DLLens covers 7.23% more branches and detects 1.88 times as many bugs as state-of-the-art techniques on 200 randomly sampled APIs. DLLens has successfully detected 71 bugs in recent TensorFlow and PyTorch releases. Among them, 59 are confirmed by developers, including 46 confirmed as previously unknown bugs, and 10 of these previously unknown bugs have been fixed in the latest versions of TensorFlow and PyTorch.
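The counterpart-synthesis insight can be illustrated with a small, hypothetical example: a TensorFlow API with no single PyTorch equivalent, tf.math.squared_difference, expressed by composing torch.sub and torch.square. This pair is our own choice for illustration; in the paper the composition is produced by an LLM-based synthesis workflow rather than written by hand.

```python
# Illustrative example of the counterpart-synthesis insight: a TensorFlow API with
# no single PyTorch equivalent can often be expressed by composing PyTorch APIs.
# The specific pair below is our own example, not one reported by the paper.
import numpy as np
import tensorflow as tf
import torch

def tf_api(x, y):
    # Original API under test.
    return tf.math.squared_difference(x, y)

def synthesized_counterpart(x, y):
    # Composition of PyTorch APIs assumed to match tf.math.squared_difference:
    # (x - y) ** 2, built from torch.sub and torch.square.
    return torch.square(torch.sub(x, y))

a = np.random.randn(3, 3).astype(np.float32)
b = np.random.randn(3, 3).astype(np.float32)
tf_out = tf_api(tf.constant(a), tf.constant(b)).numpy()
pt_out = synthesized_counterpart(torch.from_numpy(a), torch.from_numpy(b)).numpy()
assert np.allclose(tf_out, pt_out, rtol=1e-5, atol=1e-6)
```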
Problem

Research questions and friction points this paper is trying to address.

Finding alternative implementations for DL library APIs
Generating diverse test inputs for differential testing
Enhancing bug detection in TensorFlow and PyTorch libraries
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-based counterpart synthesis for DL APIs
Static analysis extracts path constraints to guide test input generation (see the sketch after this list)
Enhanced differential testing with LLM guidance
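Below is a hedged sketch of how extracted path constraints might guide input generation. The dictionary-based constraint format, the generate_input helper, and the positive-definite-matrix example (the kind of precondition an API like tf.linalg.cholesky imposes) are illustrative assumptions; in the paper, DLLens derives such constraints via static analysis of the API and its counterpart, aided by the LLM's knowledge of upstream libraries.

```python
# Hedged sketch of constraint-guided input generation. The constraint format and
# the sample constraints are assumptions for illustration only.
import numpy as np

def generate_input(constraints, rng):
    """Sample one input that satisfies simple shape/value path constraints."""
    # Shape constraint: a square matrix whose size lies in [min_dim, max_dim].
    n = int(rng.integers(constraints["min_dim"], constraints["max_dim"] + 1))
    m = rng.standard_normal((n, n)).astype(constraints["dtype"])
    if constraints.get("positive_definite"):
        # Value constraint: A @ A.T + eps * I is symmetric positive-definite
        # by construction, so the input stays on the valid execution path.
        m = m @ m.T + constraints["eps"] * np.eye(n, dtype=constraints["dtype"])
    return m

# Hypothetical constraints one might extract for an API such as tf.linalg.cholesky,
# which requires a square, positive-definite floating-point matrix.
constraints = {"min_dim": 2, "max_dim": 6, "dtype": np.float32,
               "positive_definite": True, "eps": 1e-3}
rng = np.random.default_rng(0)
inputs = [generate_input(constraints, rng) for _ in range(5)]  # diverse valid inputs
```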
👥 Authors
Meiziniu Li
The Hong Kong University of Science and Technology, Hong Kong, China
Dongze Li
The Hong Kong University of Science and Technology, Hong Kong, China
Jianmeng Liu
The Hong Kong University of Science and Technology, Hong Kong, China
Jialun Cao
The Hong Kong University of Science and Technology
SE for AI · AI for SE
Yongqiang Tian
Monash University
Software Testing and Debugging · Software Engineering
S. Cheung
The Hong Kong University of Science and Technology, Hong Kong, China