🤖 AI Summary
In gray-box regression testing for live-service games, where source code is unavailable, test cases rely heavily on manual construction, redundancy is severe, and semantic-driven prioritization is lacking, this paper proposes the first semantic-aware testing framework that integrates large language models (LLMs) with reinforcement learning. The framework unifies test generation, multi-objective test suite minimization, and version-change-driven dynamic prioritization. It introduces an LLM-guided, goal-oriented exploration strategy and improves test relevance through semantic analysis of code change logs. Evaluated on Overcooked Plus and Minecraft, the framework reduces execution overhead by 37.2% on average relative to baseline methods and manual testing, while improving the fault detection rate by 21.8%. These results demonstrate a superior balance of cost efficiency, automation, and defect-detection effectiveness.
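The multi-objective minimization step described above can be sketched as a simple greedy selection under a cost budget, maximizing marginal coverage plus a rarity bonus per unit cost. This is an illustrative approximation, not SAGE's actual algorithm; the `TestCase` fields, the budget parameter, and the scoring formula are all assumptions for the sketch.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    cost: float                                 # execution cost (e.g., seconds)
    covered: set = field(default_factory=set)   # semantic behaviors exercised
    rarity: float = 0.0                         # bonus for rarely hit behaviors

def minimize_suite(suite, budget):
    """Greedy pick: maximize marginal (new coverage + rarity) per unit cost."""
    selected, covered, spent = [], set(), 0.0
    while True:
        best, best_score = None, 0.0
        for tc in suite:
            if tc in selected or spent + tc.cost > budget:
                continue
            gain = len(tc.covered - covered) + tc.rarity
            score = gain / tc.cost
            if score > best_score:
                best, best_score = tc, score
        if best is None:
            break
        selected.append(best)
        covered |= best.covered
        spent += best.cost
    return selected

suite = [
    TestCase("cook_soup", 2.0, {"chop", "cook", "serve"}),
    TestCase("chop_only", 1.0, {"chop"}),
    TestCase("rare_burn", 1.5, {"burn"}, rarity=2.0),
]
print([t.name for t in minimize_suite(suite, budget=4.0)])
# → ['rare_burn', 'cook_soup']
```

Note that the redundant `chop_only` case is dropped: its only behavior is already covered by `cook_soup`, and the remaining budget cannot fit it anyway.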
📝 Abstract
The rapid iteration cycles of modern live-service games make regression testing indispensable for maintaining quality and stability. However, existing regression testing approaches face critical limitations, especially in common gray-box settings where full source code access is unavailable: they rely heavily on manual effort for test case construction, struggle to maintain growing suites plagued by redundancy, and lack efficient mechanisms for prioritizing relevant tests. These challenges result in excessive testing costs, limited automation, and insufficient bug detection. To address these issues, we propose SAGE, a semantic-aware regression testing framework for gray-box game environments. SAGE systematically addresses the core challenges of test generation, maintenance, and selection. It employs LLM-guided reinforcement learning for efficient, goal-oriented exploration to automatically generate a diverse foundational test suite. Subsequently, it applies semantic-based multi-objective optimization to refine this suite into a compact, high-value subset by balancing cost, coverage, and rarity. Finally, it leverages LLM-based semantic analysis of update logs to prioritize the test cases most relevant to version changes, enabling efficient adaptation across iterations. We evaluate SAGE on two representative environments, Overcooked Plus and Minecraft, comparing against both automated baselines and human-recorded test cases. In both environments, SAGE achieves superior bug detection with significantly lower execution cost, while demonstrating strong adaptability to version updates.
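The version-change-driven prioritization step can be sketched as ranking test cases by semantic overlap between their behavior tags and topics extracted from the update log. In the paper this extraction is an LLM analysis; the sketch below stubs it with keyword matching, and all names (`tags`, the example tests, the patch note) are hypothetical.

```python
def extract_change_topics(update_log):
    """Stand-in for LLM semantic analysis: naive keyword extraction."""
    return {w.strip(".,:;").lower() for w in update_log.split()}

def prioritize(tests, update_log):
    """Order tests by how many of their tags appear in the change topics."""
    topics = extract_change_topics(update_log)
    return sorted(tests, key=lambda t: -len(set(t["tags"]) & topics))

tests = [
    {"name": "crafting_flow", "tags": {"crafting", "recipe"}},
    {"name": "smelting_flow", "tags": {"furnace", "smelting"}},
]
log = "Patch 1.2: reworked furnace smelting speed and recipe costs"
print([t["name"] for t in prioritize(tests, log)])
# → ['smelting_flow', 'crafting_flow']
```

Because `sorted` is stable, tests with equal relevance keep their prior order, so an existing baseline ordering is preserved among unaffected cases.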