🤖 AI Summary
Card game prototyping faces several challenges: time-consuming ideation and gameplay validation, limited mechanic novelty, inconsistent generated code, and poor scalability in AI-based evaluation. Method: This paper proposes an end-to-end LLM-driven automation framework comprising: (1) a game-mechanic generator built on a novel graph-structured index, which moves beyond retrieval from existing game databases to improve mechanic originality; (2) a multi-stage LLM orchestration pipeline spanning concept generation, rule formalization, executable code synthesis, and AI policy derivation, with integrated code-consistency verification; and (3) a self-play reinforcement-learning AI evaluator that uses an ensemble action-value function for large-scale gameplay assessment. Results: Experiments show that the framework substantially reduces prototyping cycle time and outperforms baseline methods in mechanic novelty, code correctness rate, and AI win rate.
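The graph-structured index in (1) can be sketched roughly as follows. This is a minimal illustration under our own assumptions, not the paper's implementation: mechanic names are hypothetical, and "novelty" is simplified to mechanic pairs that never co-occur in the indexed corpus.

```python
# Sketch: index card-game mechanics as a graph so that candidate designs
# come from traversing relations rather than retrieving whole database
# entries. Mechanic names below are illustrative placeholders.
from itertools import combinations

# Nodes are mechanics; edges mark mechanics that co-occur in existing
# games, so UNSEEN pairs are candidates for novel designs.
mechanics = {"draw", "discard", "trick_taking", "deck_building", "bluffing"}
seen_pairs = {
    frozenset({"draw", "discard"}),
    frozenset({"trick_taking", "bluffing"}),
    frozenset({"draw", "deck_building"}),
}

def novel_pairs(nodes, edges):
    """Return mechanic pairs with no edge between them, i.e.
    combinations not found together in the indexed corpus."""
    return sorted(
        tuple(sorted(pair))
        for pair in (frozenset(c) for c in combinations(nodes, 2))
        if pair not in edges
    )

candidates = novel_pairs(mechanics, seen_pairs)
print(len(candidates))  # → 7 (10 possible pairs minus 3 already seen)
```

A real index would carry richer edge types (e.g. "counters", "combines with") and be populated by an LLM from game descriptions; the traversal above only shows the core idea of generating beyond what is already stored.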
📝 Abstract
The prototyping of computer games, particularly card games, requires extensive human effort in creative ideation and gameplay evaluation. Recent advances in Large Language Models (LLMs) offer opportunities to automate and streamline these processes. However, it remains challenging for LLMs to design novel game mechanics beyond existing databases, to generate consistent gameplay environments, and to develop scalable gameplay AI for large-scale evaluation. This paper addresses these challenges by introducing a comprehensive automated card game prototyping framework. The approach features a graph-based indexing method for generating novel game designs, an LLM-driven system for consistent game code generation validated against gameplay records, and a method for constructing gameplay AI from an ensemble of LLM-generated action-value functions optimized through self-play. These contributions aim to accelerate card game prototyping, reduce human labor, and lower the barrier to entry for game developers.
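The ensemble of action-value functions mentioned above can be sketched as a weighted combination of per-heuristic scores, with the policy choosing the highest-valued action. This is our own minimal reading of the technique's shape, not the paper's method; the toy heuristics, state/action encodings, and weights are all hypothetical, and in the full framework the weights would be tuned through self-play.

```python
# Sketch: an ensemble action-value function Q(s, a) formed as a weighted
# average of several (here hand-written, in the paper LLM-generated)
# heuristics; the gameplay policy is greedy with respect to Q.
from typing import Callable, Sequence

State = dict   # e.g. {"hand": [...], "tricks_won": 2} -- placeholder schema
Action = str   # e.g. "play_7_hearts" -- placeholder encoding

def make_ensemble(
    heuristics: Sequence[Callable[[State, Action], float]],
    weights: Sequence[float],
) -> Callable[[State, Action], float]:
    """Combine per-heuristic action values; self-play would adjust the
    weights toward the highest-win-rate mixture."""
    def q(state: State, action: Action) -> float:
        return sum(w * h(state, action) for w, h in zip(weights, heuristics))
    return q

# Two toy heuristics (illustrative only).
prefer_high_rank = lambda s, a: float(a.split("_")[1])        # card rank
save_trump_suit = lambda s, a: -5.0 if a.endswith("spades") else 0.0

q = make_ensemble([prefer_high_rank, save_trump_suit], [0.7, 0.3])
actions = ["play_9_spades", "play_7_hearts", "play_2_clubs"]
best = max(actions, key=lambda a: q({}, a))
print(best)  # → play_7_hearts (high rank without spending a trump)
```

The greedy argmax policy makes large-scale evaluation cheap: each simulated turn is a handful of scalar evaluations, so thousands of self-play games can be run without further LLM calls.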