Galton's Law of Mediocrity: Why Large Language Models Regress to the Mean and Fail at Creativity in Advertising

📅 2025-09-30
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit “Galtonian regression to the mean” in advertising copy generation—over-converging to statistically frequent patterns, thereby suppressing metaphorical expression, flattening emotional resonance, and impoverishing visual imagery, ultimately degrading originality. This paper formalizes this theoretical mechanism and empirically validates it via an advertising-specific stress test, input simplification–regeneration experiments, and a mixed quantitative–qualitative evaluation framework. Results show that structured prompting—particularly domain-specific cue injection—improves stylistic balance and creative fidelity, yet fails to overcome entrenched, template-like constraints. Key contributions include: (1) the first attribution of creative decay to LLMs’ intrinsic regression-to-the-mean bias; (2) establishment of a reproducible, task-grounded evaluation paradigm for creative capability; and (3) provision of both theoretical grounding and engineering guidance for developing creativity-aware LLMs.

📝 Abstract
Large language models (LLMs) generate fluent text yet often default to safe, generic phrasing, raising doubts about their ability to handle creativity. We formalize this tendency as a Galton-style regression to the mean in language and evaluate it using a creativity stress test in advertising concepts. When ad ideas were simplified step by step, creative features such as metaphors, emotions, and visual cues disappeared early, while factual content remained, showing that models favor high-probability information. When asked to regenerate from simplified inputs, models produced longer outputs with lexical variety but failed to recover the depth and distinctiveness of the originals. We combined quantitative comparisons with qualitative analysis, which revealed that the regenerated texts often appeared novel but lacked true originality. Providing ad-specific cues such as metaphors, emotional hooks and visual markers improved alignment and stylistic balance, though outputs still relied on familiar tropes. Taken together, the findings show that without targeted guidance, LLMs drift towards mediocrity in creative tasks; structured signals can partially counter this tendency and point towards pathways for developing creativity-sensitive models.
Problem

Research questions and friction points this paper is trying to address.

LLMs default to safe, generic phrasing in creative tasks such as advertising
Models favor high-probability information, losing metaphors and emotional cues
Without targeted guidance, LLMs drift toward mediocrity, lacking true originality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Used stepwise simplification to reveal creative feature loss
Combined quantitative and qualitative analysis of text regeneration
Employed ad-specific cues to partially counter mediocrity drift
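The simplification–regeneration protocol above can be sketched in a few lines. This is a hypothetical illustration, not the authors' actual pipeline: the feature tags, the toy ad concept, the drop order, and the stubbed `regenerate()` (which a real study would replace with an LLM call) are all assumptions, and type–token ratio stands in for the paper's fuller quantitative comparisons.

```python
# Hypothetical sketch of a stepwise simplification-regeneration experiment.
# An ad concept is decomposed into tagged spans: a factual core plus
# creative features (metaphor, emotion, visual imagery).
ad_concept = [
    ("fact", "Our running shoe weighs 180 grams."),
    ("metaphor", "It is a feather strapped to your foot."),
    ("emotion", "Feel the thrill of effortless speed."),
    ("visual", "Picture a blur of neon on grey asphalt."),
]

# Assumed drop order: creative features vanish before facts, mirroring the
# paper's observation that models favor high-probability information.
DROP_ORDER = ["visual", "emotion", "metaphor"]

def simplify(spans, steps):
    """Drop the first `steps` creative-feature categories, keep the rest."""
    dropped = set(DROP_ORDER[:steps])
    return [(tag, text) for tag, text in spans if tag not in dropped]

def regenerate(spans):
    """Stub for the regeneration step; a real study would prompt an LLM here."""
    return " ".join(text for _, text in spans)

def type_token_ratio(text):
    """Crude lexical-variety proxy: unique tokens over total tokens."""
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

# Walk the simplification ladder and score each regenerated output.
for step in range(len(DROP_ORDER) + 1):
    remaining = simplify(ad_concept, step)
    output = regenerate(remaining)
    print(step, [tag for tag, _ in remaining], round(type_token_ratio(output), 2))
```

The sketch only captures the mechanics; the paper's finding is that even when regeneration restores length and lexical variety, the dropped metaphorical, emotional, and visual features are not genuinely recovered.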
Matt Keon
55mV Research Lab
Aabid Karim
AI researcher at 55mV
Machine Learning, Neural Networks
Bhoomika Lohana
ML Researcher at 55mV
Machine Learning, LLMs
Abdul Karim
Griffith University
Data Science, Machine Learning, Mathematical Modeling
Thai Nguyen
Monash University
Tara Hamilton
School of Engineering, Western Sydney University
Ali Abbas
55mV Research Lab