High-quality generation of dynamic game content via small language models: A proof of concept

📅 2026-01-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges that large language models face in dynamic game content generation (narrative incoherence, high computational cost, and cloud dependency), while acknowledging that small language models, despite their suitability for local deployment, often suffer from insufficient output quality. To overcome these limitations, the authors propose training specialized small models under highly structured task constraints and aggressive fine-tuning, leveraging domain-specific data synthesized via directed acyclic graphs (DAGs). The approach is validated in a minimal role-playing game loop built around rhetorical battles of reputations, demonstrating predictable latency, real-time performance compatible with standard game engines, and a clear trade-off between task scope and model specialization. This establishes a practical foundation for deploying modular intelligent agents in interactive narrative systems.
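The DAG-based data synthesis step can be pictured as follows. This is a hypothetical sketch, not the paper's implementation: game-world facts are arranged as a directed acyclic graph, and training pairs are emitted in dependency order so each target is grounded only in facts already established. The example graph, the fact strings, and the `synthesize_examples` helper are all illustrative assumptions.

```python
from graphlib import TopologicalSorter

# Illustrative fact DAG: each node maps to the set of facts it depends on.
world_dag = {
    "kingdom": set(),
    "guild": {"kingdom"},
    "feud": {"guild"},
}

# Illustrative game-world facts keyed by DAG node.
facts = {
    "kingdom": "The kingdom of Vael taxes all trade routes.",
    "guild": "The merchant guild resents the kingdom's taxes.",
    "feud": "A feud brews between the guild and the crown.",
}

def synthesize_examples(dag: dict, fact_map: dict) -> list[dict]:
    """Emit (context, target) training pairs in topological order,
    so every target is grounded in previously established facts."""
    examples, context = [], []
    for node in TopologicalSorter(dag).static_order():
        examples.append({"context": " ".join(context),
                         "target": fact_map[node]})
        context.append(fact_map[node])
    return examples

samples = synthesize_examples(world_dag, facts)
```

Because the traversal respects edge order, the root fact is generated with an empty context and downstream facts always see their prerequisites, which is the grounding property the summary attributes to the DAG-based approach.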

📝 Abstract
Large language models (LLMs) offer promise for dynamic game content generation, but they face critical barriers, including narrative incoherence and high operational costs. Due to their large size, they are often accessed in the cloud, limiting their application in offline games. Many of these practical issues are solved by pivoting to small language models (SLMs), but existing studies using SLMs have resulted in poor output quality. We propose a strategy for achieving high-quality SLM generation through aggressive fine-tuning on deliberately scoped tasks with narrow context, constrained structure, or both. In short, more difficult tasks require narrower scope and higher specialization to the training corpus. Training data is synthetically generated via a DAG-based approach, grounding models in the specific game world. Such models can form the basis for agentic networks designed around the narratological framework at hand, representing a more practical and robust solution than cloud-dependent LLMs. To validate this approach, we present a proof-of-concept focusing on a single specialized SLM as the fundamental building block. We introduce a minimal RPG loop revolving around rhetorical battles of reputations, powered by this model. We demonstrate that a simple retry-until-success strategy reaches adequate quality (as defined by an LLM-as-a-judge scheme) with predictable latency suitable for real-time generation. While local quality assessment remains an open question, our results demonstrate feasibility for real-time generation under typical game engine constraints.
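The retry-until-success strategy described in the abstract can be sketched as below. This is a hedged illustration only: `generate` and `judge` are placeholders for the paper's fine-tuned SLM and LLM-as-a-judge components (neither is specified in this page), and the `max_attempts` cap is what keeps worst-case latency bounded and predictable.

```python
def generate(prompt: str, attempt: int) -> str:
    # Placeholder for a local SLM call; returns a deterministic stub so the
    # loop is runnable. A real system would sample from the fine-tuned model.
    return f"draft-{attempt} for {prompt!r}"

def judge(text: str) -> bool:
    # Placeholder quality gate; the paper uses an LLM-as-a-judge scheme.
    # Here we pretend the third draft is the first acceptable one.
    return "draft-3" in text

def retry_until_success(prompt: str, max_attempts: int = 5) -> tuple[str, int]:
    """Sample until the judge accepts, returning (draft, attempts used).
    The attempt cap bounds worst-case latency for real-time use."""
    for attempt in range(1, max_attempts + 1):
        draft = generate(prompt, attempt)
        if judge(draft):
            return draft, attempt
    return draft, max_attempts  # fall back to the last draft if all fail

text, attempts = retry_until_success("taunt the rival merchant")
```

With a fixed per-call generation time and a capped attempt count, total latency is bounded by their product, which matches the abstract's claim of predictable latency under typical game engine constraints.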
Problem

Research questions and friction points this paper is trying to address.

- dynamic game content generation
- small language models
- offline games
- narrative coherence
- real-time generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

- small language models
- dynamic game content generation
- task-scoped fine-tuning
- synthetic data generation
- real-time narrative generation
Morten I. K. Munk
brAIn lab, IT University of Copenhagen; Raw Power Labs, Copenhagen, Denmark
Arturo Valdivia
Data Science Section, IT University of Copenhagen, Copenhagen, Denmark
Paolo Burelli
Associate Professor
Artificial Intelligence · Data Mining · Computer Games