PersonaFlow: Boosting Research Ideation with LLM-Simulated Expert Personas

📅 2024-09-19
🏛️ arXiv.org
📈 Citations: 13
Influential: 1
🤖 AI Summary
Interdisciplinary ideation is often hindered by the scarcity of domain experts and the delays in obtaining their feedback. To address this, we propose PersonaFlow, a multi-expert persona simulation framework that leverages large language models (LLMs) to model the cognitive traits of interdisciplinary experts, combining interactive prompt engineering with dynamic, personalized persona adaptation to deliver high-fidelity, controllable virtual expert feedback. Empirical evaluation shows significant improvements in idea relevance (+37%) and creativity (+42%), as well as an enhanced sense of user control (+63%) and idea recall (+58%), without increasing cognitive load. The work also provides a first systematic identification of latent ethical risks in persona simulation, namely cognitive biases and over-reliance on AI, establishing both a methodological foundation and a practical paradigm for AI-augmented interdisciplinary innovation.

📝 Abstract
Developing novel interdisciplinary research ideas often requires discussions and feedback from experts across different domains. However, obtaining timely inputs is challenging due to the scarce availability of domain experts. Recent advances in Large Language Model (LLM) research have suggested the feasibility of utilizing LLM-simulated expert personas to support research ideation. In this study, we introduce PersonaFlow, an LLM-based system using persona simulation to support the ideation stage of interdisciplinary scientific discovery. Our findings indicate that using multiple personas during ideation significantly enhances user-perceived quality of outcomes (e.g., relevance of critiques, creativity of research questions) without increasing cognitive load. We also found that users' persona customization interactions significantly improved their sense of control and recall of generated ideas. Based on these findings, we highlight ethical concerns, including potential over-reliance and cognitive biases, and suggest design implications for leveraging LLM-simulated expert personas to support research ideation when human expertise is inaccessible.
Problem

Research questions and friction points this paper is trying to address.

Limited access to expert feedback for interdisciplinary research ideation
Need for diverse domain expertise in generating creative research ideas
Potential over-reliance on AI without user customization and agency
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-simulated domain experts for diverse perspectives
Customizable expert profiles enhance user agency
Boosts creativity without increasing cognitive load