AI Summary
This study addresses the limitation of language models in associative thinking, i.e., cross-conceptual reasoning, by enhancing their creative capabilities. We propose a reinforcement learning-based fine-tuning framework that, for the first time, incorporates divergent thinking metrics (novelty and conceptual connectivity) into the reward function. To enable scalable evaluation, we design a prompt-driven, unsupervised assessment mechanism that encourages models to autonomously construct deep, cross-domain associations. The method operates entirely without human annotation, relying solely on self-supervised prompt generation to derive reward signals. Experiments across diverse generative tasks, including story writing, code generation, and diagram synthesis, demonstrate substantial improvements in originality, logical coherence, and abstract cross-task transfer. Our approach consistently outperforms baseline models on multiple creativity-oriented metrics. The core contribution is a learnable associative reasoning mechanism that jointly enhances creative generation and abstract inference.
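The composite reward described above can be sketched as a weighted combination of a novelty term and a conceptual-connectivity term. This is a minimal illustrative sketch, not the paper's implementation: the token-overlap novelty proxy, the concept-coverage connectivity proxy, and the weights `w_novel`/`w_conn` are all assumptions made for clarity.

```python
# Hypothetical sketch of the divergent-thinking reward: both scoring
# functions below are toy proxies, not the paper's actual metrics.

def novelty_score(output: str, corpus: list[str]) -> float:
    """Toy novelty proxy: fraction of output tokens unseen in the corpus."""
    seen = {tok for doc in corpus for tok in doc.lower().split()}
    toks = output.lower().split()
    if not toks:
        return 0.0
    return sum(1 for t in toks if t not in seen) / len(toks)

def connectivity_score(output: str, concepts: list[str]) -> float:
    """Toy connectivity proxy: fraction of target concepts the output mentions."""
    text = output.lower()
    if not concepts:
        return 0.0
    return sum(1 for c in concepts if c.lower() in text) / len(concepts)

def divergent_reward(output: str, corpus: list[str], concepts: list[str],
                     w_novel: float = 0.5, w_conn: float = 0.5) -> float:
    """Weighted combination used as the scalar RL reward signal."""
    return (w_novel * novelty_score(output, corpus)
            + w_conn * connectivity_score(output, concepts))
```

In practice, the paper's prompt-driven evaluation would replace these lexical proxies with model-generated judgments, but the reward shape (weighted novelty plus connectivity) is the same.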
Abstract
Associative thinking, the ability to connect seemingly unrelated ideas, is a foundational element of human creativity and problem-solving. This paper explores whether reinforcement learning (RL) guided by associative-thinking principles can enhance a model's performance across diverse generative tasks, including story writing, code generation, and chart creation. We introduce an RL framework with a prompt-based evaluation mechanism that incorporates established divergent thinking metrics from creativity research. A base language model is fine-tuned under this framework to reward outputs that demonstrate greater novelty through stronger conceptual connectivity. The experimental results suggest that models trained in this way not only generate more original and coherent stories but also exhibit improved abstraction and flexibility in tasks such as programming and data visualization. Our findings provide initial evidence that modeling cognitive creativity principles through reinforcement learning can yield more adaptive and generative AI.
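The fine-tuning loop the abstract describes can be illustrated with a minimal REINFORCE-style update: a policy over candidate outputs is nudged toward those earning a higher divergent-thinking reward. This is a toy sketch under stated assumptions; the two-action policy, the reward values, and the learning rate are illustrative, not the paper's actual training setup.

```python
# Minimal REINFORCE-style sketch: exact gradient ascent on expected reward
# for a tiny softmax policy. All values here are illustrative assumptions.
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def reinforce_step(logits, rewards, lr=0.5):
    """One update: for a softmax policy, the gradient of expected reward
    with respect to logit i is p_i * (r_i - baseline), with the mean
    reward under the policy as the baseline."""
    probs = softmax(logits)
    baseline = sum(r * p for r, p in zip(rewards, probs))
    return [
        logit + lr * p * (r - baseline)
        for logit, p, r in zip(logits, probs, rewards)
    ]

# Repeated updates shift probability mass toward the higher-reward output,
# mirroring how the RL fine-tuning favors more "associative" generations.
logits = [0.0, 0.0]
for _ in range(100):
    logits = reinforce_step(logits, [1.0, 0.0])
```

In the paper's setting the two candidate actions would be replaced by full model generations and the fixed rewards by the prompt-based divergent-thinking scores, but the update direction (reinforce what scores above the baseline) is the same.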