🤖 AI Summary
Large language models (LLMs) exhibit limited combinatorial creativity (CC) in open-ended tasks such as scientific idea generation, yet no unified theoretical framework or quantitative evaluation methodology for CC exists.
Method: We propose a theoretical framework of creativity grounded in novelty and utility as its core dimensions; design the first algorithmic task for evaluating CC; and establish a reproducible experimental paradigm with controlled compute budgets.
Contribution/Results: We provide the first characterization of the scaling behavior of LLMs’ combinatorial creativity, identify compute-optimal configurations of model depth and width for creative ability, and show that the ideation–execution gap may stem from a more fundamental novelty–utility tradeoff that persists even at scale. Our work offers a new lens on LLMs’ open-ended generalization and a principled, reproducible paradigm for assessing combinatorial creativity.
📝 Abstract
Artificial intelligence (AI) systems, and large language models (LLMs) in particular, are increasingly employed for creative tasks like scientific idea generation, constituting a form of generalization from training data unaddressed by existing conceptual frameworks. Though in many ways similar to forms of compositional generalization (CG), combinatorial creativity (CC) is an open-ended ability. Instead of evaluating for accuracy or correctness against fixed targets, which would contradict the open-ended nature of CC, we propose a theoretical framework and algorithmic task for evaluating outputs by their degrees of novelty and utility. From here, we make several important empirical contributions: (1) We obtain the first insights into the scaling behavior of creativity for LLMs. (2) We discover that, for fixed compute budgets, there exist optimal model depths and widths for creative ability. (3) We find that the ideation-execution gap, whereby LLMs excel at generating novel scientific ideas but struggle to ensure their practical feasibility, may be explained by a more fundamental novelty-utility tradeoff characteristic of creativity algorithms in general. Importantly, this tradeoff remains persistent even at scale, casting doubt on the long-term creative potential of LLMs in their current form. Together, our conceptual framework and empirical findings provide a foundation for understanding and improving creativity in modern AI models, marking a new frontier in generalization abilities.
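The paper does not spell out its evaluation algorithm in the abstract, so the following is a purely illustrative sketch of the general idea of scoring open-ended outputs along two axes rather than against a fixed target: novelty as dissimilarity to a reference corpus, utility as coverage of task requirements. All names, the token-set representation, and the Jaccard-based metrics are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch (NOT the paper's actual task): score generated
# "ideas", represented as token sets, on novelty and utility.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two token sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 1.0

def novelty(idea: set, corpus: list) -> float:
    """1 minus the max similarity to any known item: high when the
    idea resembles nothing previously seen."""
    return 1.0 - max((jaccard(idea, ref) for ref in corpus), default=0.0)

def utility(idea: set, requirements: set) -> float:
    """Fraction of the task's requirements the idea satisfies -- a crude
    stand-in for practical feasibility."""
    return len(idea & requirements) / len(requirements) if requirements else 1.0

# Hypothetical reference corpus and task constraints.
corpus = [{"solar", "panel", "roof"}, {"wind", "turbine", "offshore"}]
requirements = {"solar", "storage"}

idea = {"solar", "storage", "kite"}
print(novelty(idea, corpus), utility(idea, requirements))  # high on both axes
```

Under such a scheme, the novelty–utility tradeoff discussed above would appear as a negative correlation between the two scores across a model's sampled outputs: ideas far from the corpus tend to satisfy fewer requirements.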