🤖 AI Summary
This study investigates how the timing of large language model (LLM) intervention affects the creative writing process, comparing "initial use" (invoking the LLM at the outset of writing) with "delayed use" (invoking it only after independent ideation). We conducted a randomized controlled experiment with 60 participants, combining behavioral metrics, standardized scales of autonomy, creative self-efficacy, and outcome self-attribution, and mediation analysis. Results show that initial LLM use significantly reduces the quantity of original ideas and diminishes both creative self-efficacy and self-attribution for outcomes. These effects are mediated by reduced perceived autonomy and a diminished sense of idea ownership. This work provides the first empirical evidence of a "creativity-suppression" risk associated with early LLM integration and proposes "delayed intervention" as a novel paradigm. It offers empirically grounded design principles for safeguarding authorial agency and creative sovereignty in human–AI collaborative writing.
📝 Abstract
Large Language Models (LLMs) have been widely used to support ideation in the writing process. However, it is unclear whether generating ideas with the help of LLMs leads to idea fixation or idea expansion. This study examines how the timing of LLM use, either at the beginning of writing or after independent ideation, affects people's perceptions and ideation outcomes in a writing task. In a controlled experiment with 60 participants, we found that using LLMs from the beginning reduced the number of original ideas and lowered creative self-efficacy and self-credit, effects mediated by changes in autonomy and ownership. We discuss the challenges and opportunities of using LLMs to assist idea generation, and we propose delaying LLM use in ideation support while attending to users' self-efficacy, autonomy, and ownership of ideation outcomes.