Learning to Play Like Humans: A Framework for LLM Adaptation in Interactive Fiction Games

📅 2025-05-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current AI agents for interactive fiction (IF) overemphasize task performance while neglecting narrative comprehension and commonsense constraints. To address this, we propose a cognition-inspired large language model (LLM) adaptation framework that formally models the human “read–understand–respond” cognitive process as a three-stage cognitive alignment paradigm: (1) structured spatial map construction, (2) contextualized action instruction fine-tuning, and (3) reinforcement feedback distillation with explicit cognitive constraint injection. By integrating symbolic modeling with neural language understanding, our framework significantly improves both task completion rates and narrative consistency. Evaluated across multiple IF benchmarks, it achieves behavioral distributions that closely approximate those of human players, while enhancing decision interpretability and robust cross-scenario generalization. This represents the first systematic effort to embed cognitive principles—rather than purely statistical patterns—into IF agent design, bridging the gap between functional efficacy and narratively grounded reasoning.

📝 Abstract
Interactive Fiction games (IF games) are text-based games in which players interact through natural language commands. While recent advances in Artificial Intelligence agents have reignited interest in IF games as a domain for studying decision-making, existing approaches prioritize task-specific performance metrics over human-like comprehension of narrative context and gameplay logic. This work presents a cognitively inspired framework that guides Large Language Models (LLMs) to learn and play IF games systematically. Our proposed **L**earning to **P**lay **L**ike **H**umans (LPLH) framework integrates three key components: (1) structured map building to capture spatial and narrative relationships, (2) action learning to identify context-appropriate commands, and (3) feedback-driven experience analysis to refine decision-making over time. By aligning LLM-based agents' behavior with narrative intent and commonsense constraints, LPLH moves beyond purely exploratory strategies to deliver more interpretable, human-like performance. Crucially, this approach draws on cognitive science principles to more closely simulate how human players read, interpret, and respond within narrative worlds. As a result, LPLH reframes the IF games challenge as a learning problem for LLM-based agents, offering a new path toward robust, context-aware gameplay in complex text-based environments.
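The three components in the abstract can be pictured as one agent loop. The sketch below is purely illustrative: the paper does not publish this interface, so every class name, method, and the naive "avoid previously failed actions" heuristic (standing in for an LLM query) are assumptions made here for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class LPLHAgent:
    # Hypothetical skeleton of the LPLH loop; names are invented, not from the paper.
    world_map: dict = field(default_factory=dict)      # component 1: structured map
    experiences: list = field(default_factory=list)    # component 3: feedback log

    def update_map(self, location, exits):
        # Component 1: record spatial relations as an adjacency map.
        self.world_map.setdefault(location, set()).update(exits)

    def choose_action(self, observation, valid_actions):
        # Component 2: a real agent would prompt an LLM with the observation;
        # here a commonsense-style filter just skips actions that failed before.
        failed = {a for (a, ok) in self.experiences if not ok}
        for action in valid_actions:
            if action not in failed:
                return action
        return valid_actions[0]

    def record_feedback(self, action, success):
        # Component 3: store outcomes so later decisions avoid repeated mistakes.
        self.experiences.append((action, success))

agent = LPLHAgent()
agent.update_map("Foyer", {"north", "west"})
agent.record_feedback("open grate", False)
print(agent.choose_action("You are in the Foyer.", ["open grate", "go north"]))
# → go north
```

The point of the skeleton is the division of state: spatial knowledge and outcome history are kept separate from the per-turn policy, which is what lets feedback refine future decisions without rebuilding the map.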
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM adaptation in IF games for human-like narrative comprehension
Developing structured spatial and narrative mapping for IF gameplay
Improving decision-making via feedback-driven experience analysis in IF games
Innovation

Methods, ideas, or system contributions that make the work stand out.

Structured map building for spatial narrative relationships
Action learning for context-appropriate commands
Feedback-driven experience analysis for decision refinement
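To make the first innovation concrete, structured map building can be sketched as a room graph assembled from observed moves and then queried for routes. This is a generic sketch under assumptions made here (the paper does not specify this representation); room names, the triple format, and the BFS query are all invented for illustration.

```python
from collections import deque

def build_map(transitions):
    """Build a room graph from observed (room, direction, next_room) triples."""
    graph = {}
    for room, direction, nxt in transitions:
        graph.setdefault(room, {})[direction] = nxt
        graph.setdefault(nxt, {})
    return graph

def find_route(graph, start, goal):
    """Breadth-first search returning a list of directions, or None if unreachable."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        room, path = queue.popleft()
        if room == goal:
            return path
        for direction, nxt in graph[room].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [direction]))
    return None

moves = [("Foyer", "north", "Hall"), ("Hall", "east", "Library")]
graph = build_map(moves)
print(find_route(graph, "Foyer", "Library"))   # → ['north', 'east']
```

A symbolic map like this is what lets an agent plan multi-step navigation instead of rediscovering exits by trial and error, which is the gap between exploratory play and human-like play that the framework targets.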