🤖 AI Summary
This study investigates large language models’ (LLMs) capacity to preserve semantic isotopies—coherent, recurring semantic threads—during story continuation. Using 10,000 ROCStories prompts, five state-of-the-art models (including GPT-4o) generated continuations, enabling the first systematic evaluation of semantic structural continuity across character, event, emotion, and theme dimensions. Methodologically, the study integrates distributional semantic modeling with structured narrative analysis, proposing three interpretable quantitative metrics—coverage, density, and spread—to measure isotopic preservation. Results show that, under a fixed output length constraint, all models maintain semantic isotopies robustly, exhibiting particularly strong consistency in core narrative elements. This work extends the empirical foundation of semantic coherence theory within generative AI and introduces a fine-grained evaluation paradigm for controllable narrative generation.
📝 Abstract
In this work, we explore the relevance of textual semantics to Large Language Models (LLMs), extending previous insights into the connection between distributional semantics and structural semantics. We investigate whether LLM-generated texts preserve semantic isotopies. We design a story continuation experiment using 10,000 ROCStories prompts completed by five LLMs. We first validate GPT-4o's ability to extract isotopies from a linguistic benchmark, then apply it to the generated stories. We then analyze structural (coverage, density, spread) and semantic properties of isotopies to assess how they are affected by completion. Results show that LLM completion within a given token horizon preserves semantic isotopies across multiple properties.
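The three structural properties can be pictured as simple ratios over where an isotopy's words occur in a story. The sketch below is illustrative only: the function `isotopy_metrics` and the exact definitions (coverage as sentence fraction, density as occurrences per token, spread as normalized first-to-last distance) are plausible assumptions for exposition, not the paper's formulas.

```python
# Illustrative sketch of the three structural metrics for a semantic isotopy.
# NOTE: these definitions are assumptions for illustration; the paper's exact
# formulas may differ.

def isotopy_metrics(occurrences, n_sentences, n_tokens):
    """occurrences: list of (sentence_index, token_index) pairs marking where
    words belonging to the isotopy appear in the story (hypothetical input)."""
    if not occurrences or n_tokens < 2 or n_sentences < 1:
        return {"coverage": 0.0, "density": 0.0, "spread": 0.0}
    sentences = {s for s, _ in occurrences}          # distinct sentences touched
    positions = sorted(t for _, t in occurrences)    # token positions, in order
    return {
        # coverage: fraction of sentences that carry the isotopy
        "coverage": len(sentences) / n_sentences,
        # density: isotopy occurrences per token of text
        "density": len(occurrences) / n_tokens,
        # spread: normalized distance between first and last occurrence
        "spread": (positions[-1] - positions[0]) / (n_tokens - 1),
    }

# Example: an "emotion" isotopy surfacing in 3 of 5 sentences of a 60-token story
metrics = isotopy_metrics([(0, 2), (2, 25), (4, 58)], n_sentences=5, n_tokens=60)
```

Under these assumed definitions, comparing each metric before and after completion indicates whether the continuation dilutes, concentrates, or simply extends the isotopy within the fixed token horizon.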