🤖 AI Summary
This study investigates how fictional texts, particularly novels, shape the outputs of large language models differently from non-fictional corpora such as news articles, Reddit posts, and Wikipedia entries. It presents the first systematic integration of literary theory, drawing on Catherine Gallagher's and James Phelan's accounts of fictionality, with computational analysis of LLM training data. Through comparative experiments on multi-genre corpora with an open-source model (BERT), the research shows that such models draw heavily on the narrative structures and sociopragmatic features prevalent in fiction. They not only reproduce the fictive qualities of these texts but also generate novel forms of social responsiveness, revealing the pivotal role of fictional discourse in AI-mediated cultural production.
📝 Abstract
Generative models, like the one powering ChatGPT, are powered by their training data. At bottom, the models are next-word predictors, built on patterns learned from vast amounts of pre-existing text. Strikingly, since the first generation of GPT, the most popular training datasets have included substantial collections of novels. Among the engineers and research scientists who build these models, there is a common belief that the language of fiction is rich enough to cover all manner of social and communicative phenomena, yet that belief has gone mostly unexamined. How does fiction shape the outputs of generative AI? Specifically, what are novels' effects relative to other forms of text, such as newspapers, Reddit, and Wikipedia? Since the 1970s, literary scholars such as Catherine Gallagher and James Phelan have developed robust and insightful accounts of how fiction operates as a form of discourse and language. Through our study of an influential open-source model (BERT), we find that LLMs leverage familiar attributes and affordances of fiction, while also fomenting new qualities and forms of social response. We argue that if contemporary culture is increasingly shaped by generative AI and machine learning, then any analysis of today's various modes of cultural production must account for a relatively novel dimension: computational training data.
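To make the underlying mechanism concrete, the sketch below shows one simple way a reader could probe how a pretrained model's predictions reflect patterns absorbed from its training data. This is an illustrative example only, not the study's actual experimental protocol: it assumes the Hugging Face `transformers` library and the public `bert-base-uncased` checkpoint, and the prompts are invented for demonstration rather than drawn from the paper's corpora.

```python
# Minimal sketch: querying BERT's masked-token predictions to see which
# completions it considers most likely. The distribution of candidates
# reflects patterns learned from the model's pre-training text, which is
# the kind of behavior the study interrogates.
# NOTE: prompts and labels here are hypothetical, for illustration only.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

prompts = {
    "fiction-like": "She closed the door quietly and [MASK] into the night.",
    "news-like": "Officials announced that the policy would [MASK] next year.",
}

for register, prompt in prompts.items():
    print(f"\n{register}: {prompt}")
    # Each candidate dict includes the predicted token string and its score.
    for candidate in fill_mask(prompt, top_k=5):
        print(f"  {candidate['token_str']!r}  (score={candidate['score']:.3f})")
```

Comparing the ranked completions across registers like these gives an informal sense of how strongly a model's behavior is conditioned by the genres present in its training data; the paper's own comparative experiments pursue this question systematically.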