SAGE: Semantic-Aware Gray-Box Game Regression Testing with Large Language Models

📅 2025-11-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
In regression testing of gray-box live-service games—where source code is unavailable, test cases rely heavily on manual construction, redundancy is severe, and semantic-driven prioritization is lacking—this paper proposes the first semantic-aware testing framework integrating large language models (LLMs) with reinforcement learning. The framework unifies test generation, multi-objective test suite minimization, and version-change-driven dynamic prioritization. It introduces an LLM-guided, goal-oriented exploration strategy and enhances test relevance via semantic analysis of code change logs. Evaluated on Overcooked Plus and Minecraft, the framework reduces execution overhead by 37.2% on average compared to baseline methods and manual testing, while improving fault detection rate by 21.8%. These results demonstrate its superior balance of cost efficiency, automation capability, and defect-detection effectiveness.

📝 Abstract
The rapid iteration cycles of modern live-service games make regression testing indispensable for maintaining quality and stability. However, existing regression testing approaches face critical limitations, especially in common gray-box settings where full source code access is unavailable: they heavily rely on manual effort for test case construction, struggle to maintain growing suites plagued by redundancy, and lack efficient mechanisms for prioritizing relevant tests. These challenges result in excessive testing costs, limited automation, and insufficient bug detection. To address these issues, we propose SAGE, a semantic-aware regression testing framework for gray-box game environments. SAGE systematically addresses the core challenges of test generation, maintenance, and selection. It employs LLM-guided reinforcement learning for efficient, goal-oriented exploration to automatically generate a diverse foundational test suite. Subsequently, it applies a semantic-based multi-objective optimization to refine this suite into a compact, high-value subset by balancing cost, coverage, and rarity. Finally, it leverages LLM-based semantic analysis of update logs to prioritize test cases most relevant to version changes, enabling efficient adaptation across iterations. We evaluate SAGE on two representative environments, Overcooked Plus and Minecraft, comparing against both automated baselines and human-recorded test cases. Across all environments, SAGE achieves superior bug detection with significantly lower execution cost, while demonstrating strong adaptability to version updates.
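The suite-minimization stage described above balances cost, coverage, and rarity. As a rough illustration (not the paper's algorithm), a greedy heuristic can pick, at each step, the test whose new coverage plus rarity per unit cost is highest; the `TestCase` fields and the scoring formula below are hypothetical stand-ins for whatever objectives SAGE actually optimizes:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    cost: float                                  # hypothetical execution cost
    covered: set = field(default_factory=set)    # game states/behaviors exercised
    rarity: float = 0.0                          # bonus for covering rare states

def minimize_suite(cases, coverage_goal):
    """Greedy sketch: repeatedly select the test with the best
    (new coverage + rarity) / cost ratio until the goal is covered."""
    selected, covered = [], set()
    remaining = list(cases)

    def gain(tc):
        new = len(tc.covered - covered)
        return (new + tc.rarity) / tc.cost if new else 0.0

    while not coverage_goal <= covered and remaining:
        best = max(remaining, key=gain)
        if gain(best) == 0.0:
            break  # no remaining test adds coverage
        selected.append(best)
        covered |= best.covered
        remaining.remove(best)
    return selected
```

A cheap test that covers only already-seen states scores zero and is dropped, which is how redundancy in a grown suite gets pruned.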
Problem

Research questions and friction points this paper is trying to address.

Automates test generation for gray-box game regression testing
Optimizes test suites to reduce redundancy and improve coverage
Prioritizes tests based on semantic analysis of version updates
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-guided reinforcement learning for test generation
Semantic-based multi-objective optimization for test refinement
LLM-based semantic analysis for test prioritization
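The third innovation ranks tests by how relevant they are to an update log. A minimal sketch of the idea, substituting simple Jaccard token overlap for the LLM-based semantic analysis SAGE actually uses (test names and the dictionary schema here are invented for illustration):

```python
def prioritize(tests, update_log):
    """Rank tests by lexical overlap with the update log.
    Stand-in for LLM semantic matching: Jaccard similarity over tokens."""
    log_tokens = set(update_log.lower().split())

    def score(description):
        tokens = set(description.lower().split())
        return len(tokens & log_tokens) / len(tokens | log_tokens)

    return sorted(tests, key=lambda t: score(t["description"]), reverse=True)
```

An embedding model or LLM judgment would replace `score` in practice, since lexical overlap misses paraphrases ("cutting board" vs. "chopping station"), but the ranking-by-relevance structure is the same.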
Jinyu Cai
Waseda University, Tokyo, Japan.
Jialong Li
Waseda University
self-adaptive systems · requirement engineering · human-in-the-loop
Nianyu Li
Independent Researcher, Beijing, China.
Zhenyu Mao
City University of Hong Kong, Hong Kong, China.
Mingyue Zhang
Southwest University, Chongqing, China.
Kenji Tei
Institute of Science Tokyo
software architecture · requirement engineering · self-adaptive systems · formal verification