IPCGRL: Language-Instructed Reinforcement Learning for Procedural Level Generation

📅 2025-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limited controllability and poor generalization to unseen natural-language instructions in procedural game-level generation with deep reinforcement learning (DRL). The authors propose IPCGRL, an instruction-driven DRL framework that pairs a task-specific sentence-embedding model (a BERT-style encoder fine-tuned on instruction data) with the policy network, so that action selection is conditioned on the semantics of the instruction. Extending the conditional input modality to natural language improves generalization to novel instructions: evaluated in a 2D level-generation environment, the method achieves up to a 21.4% improvement in controllability and a 17.2% gain in generalizability on unseen instructions, pointing toward language-guided, controllable content generation in procedural authoring.

📝 Abstract
Recent research has highlighted the significance of natural language in enhancing the controllability of generative models. While various efforts have been made to leverage natural language for content generation, research on deep reinforcement learning (DRL) agents utilizing text-based instructions for procedural content generation remains limited. In this paper, we propose IPCGRL, an instruction-based procedural content generation method via reinforcement learning, which incorporates a sentence embedding model. IPCGRL fine-tunes task-specific embedding representations to effectively compress game-level conditions. We evaluate IPCGRL in a two-dimensional level generation task and compare its performance with a general-purpose embedding method. The results indicate that IPCGRL achieves up to a 21.4% improvement in controllability and a 17.2% improvement in generalizability for unseen instructions. Furthermore, the proposed method extends the modality of conditional input, enabling a more flexible and expressive interaction framework for procedural content generation.
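The abstract describes conditioning a DRL policy on a compressed sentence embedding of the instruction. A minimal sketch of that idea is below; it is not the authors' code. The toy hash-based embedder (standing in for the fine-tuned sentence encoder), the dimensions, and the linear policy are all placeholder assumptions chosen only to illustrate how an instruction embedding can be concatenated with the level observation before action selection.

```python
import hashlib
import numpy as np

EMB_DIM = 8      # instruction-embedding size (stand-in for a BERT-style encoder)
OBS_DIM = 16     # flattened 2D level observation (assumed size)
N_ACTIONS = 4    # e.g. tile-editing actions (assumed)

def embed_instruction(text: str) -> np.ndarray:
    """Toy deterministic embedding standing in for a fine-tuned sentence encoder."""
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % (2**32)
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(EMB_DIM)
    return v / np.linalg.norm(v)  # unit-normalize the embedding

class ConditionedPolicy:
    """Linear policy over the concatenation [observation ; instruction embedding]."""
    def __init__(self, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((N_ACTIONS, OBS_DIM + EMB_DIM)) * 0.1

    def act(self, obs: np.ndarray, instr_emb: np.ndarray) -> int:
        # Instruction-conditioned action selection: the same observation can
        # yield different actions under different instructions.
        x = np.concatenate([obs, instr_emb])
        logits = self.W @ x
        return int(np.argmax(logits))

policy = ConditionedPolicy()
obs = np.zeros(OBS_DIM)
action = policy.act(obs, embed_instruction("place many walls"))
print(action)
```

In the paper's actual setup the embedder is trained (fine-tuned) jointly with the RL agent so that the representation compresses the game-level conditions relevant to the task; the linear layer here merely stands in for that learned policy network.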
Problem

Research questions and friction points this paper is trying to address.

Enhance procedural content generation using natural language instructions.
Improve controllability and generalizability in level generation tasks.
Extend conditional input modalities for flexible content generation.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses reinforcement learning for procedural generation
Incorporates sentence embedding for text instructions
Enhances controllability and generalizability in generation
In-Chang Baek
AI Graduate School, GIST
Procedural Content Generation, Game Artificial Intelligence
Sung-Hyun Kim
Gwangju Institute of Science and Technology (GIST), South Korea
Seo-Young Lee
Gwangju Institute of Science and Technology (GIST), South Korea
Dong-Hyeun Lee
Dongseo University, South Korea
Kyung-Joong Kim
Professor, Department of AI Convergence, GIST
Artificial Intelligence, Games, Game AI