🤖 AI Summary
This study investigates large language models’ (LLMs) fine-grained understanding of character–location relations in narrative texts. Addressing the absence of dedicated evaluation benchmarks, we introduce two manually annotated, cross-style and cross-era datasets—Andersen’s fairy tales and Austen’s *Persuasion*—designed to probe long-range spatial tracking and context-dependent reasoning. We adopt a “context snippet + localization question” evaluation paradigm, integrating human-curated annotations with LLM-based prompt engineering (context extraction + positional inference) for systematic assessment. Results reveal that state-of-the-art models achieve only 61.85% accuracy on the Andersen dataset and 56.06% on *Persuasion*, exposing critical limitations in modeling spatiotemporal structure across extended narratives. This work establishes the first reproducible benchmark for narrative spatial reasoning, fills a key gap in LLM evaluation, and provides methodological foundations for future research on spatial semantic modeling in discourse.
📝 Abstract
The ability of machines to grasp spatial understanding within narrative contexts is an intriguing aspect of reading comprehension that continues to be studied. Motivated by the goal of testing AI's competence in understanding the relationship between characters and their respective locations in narratives, we introduce two new datasets: Andersen and Persuasion. For the Andersen dataset, we selected fifteen children's stories from "Andersen's Fairy Tales" by Hans Christian Andersen and manually annotated the characters and their respective locations throughout each story. Similarly, for the Persuasion dataset, characters and their locations in the novel "Persuasion" by Jane Austen were also manually annotated. We used these datasets to prompt Large Language Models (LLMs). The prompts are created by extracting excerpts from the stories or the novel and combining them with a question asking the location of a character mentioned in that excerpt. Of the five LLMs we tested, the best-performing one on the Andersen dataset accurately identified the location in 61.85% of the examples, while on the Persuasion dataset, the best-performing one did so in 56.06% of the cases.
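The "excerpt + location question" prompting scheme described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact template; the function name, wording, and example excerpt are all assumptions for illustration.

```python
# Hypothetical sketch of the "excerpt + localization question" prompt
# construction: combine a story excerpt with a question asking where a
# named character is located.

def build_location_prompt(excerpt: str, character: str) -> str:
    """Combine a story excerpt with a location question about a character."""
    return (
        "Read the following excerpt from a story:\n\n"
        f"{excerpt}\n\n"
        f"Question: Based on the excerpt, where is {character} located? "
        "Answer with the location only."
    )

# Invented example in the style of an Andersen fairy tale (not from the dataset).
excerpt = "The little mermaid swam up to the surface near the prince's castle."
prompt = build_location_prompt(excerpt, "the little mermaid")
print(prompt)
```

The resulting string would then be sent to each LLM, and the model's answer compared against the manually annotated gold location.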