🤖 AI Summary
Novice founders often struggle with entrepreneurial ambiguity, including risk identification, hypothesis validation, and decision-making under uncertainty, due to underdeveloped metacognitive capabilities; meanwhile, mentors face constraints in time and in understanding novices' cognitive needs, limiting personalized guidance. To address this, we propose a human-AI collaborative, proactive entrepreneurship mentoring system that integrates a domain-specific cognitive model with large language models (LLMs), making the AI's reasoning transparent, interpretable, and auditable by mentors. We introduce the first dual-agent (mentor-mentee) metacognitive intervention framework, enabling dynamic co-regulation of affective alignment and strategic focus. A field deployment demonstrates improvements in mentoring depth, intentionality, and attentional focus; enhanced mentee metacognition and mentor capacity for planning affective responses; and critical challenges, including AI trust calibration, misjudgment mitigation, and expectation management.
📝 Abstract
Entrepreneurship requires navigating open-ended, ill-defined problems: identifying risks, challenging assumptions, and making strategic decisions under deep uncertainty. Novice founders often struggle with these metacognitive demands, while mentors lack the time and visibility to provide tailored support. We present a human-AI coaching system that combines a domain-specific cognitive model of entrepreneurial risk with a large language model (LLM) to proactively scaffold both novice and mentor thinking. The system poses diagnostic questions that challenge novices' thinking and helps both novices and mentors plan more focused, emotionally attuned meetings. Critically, mentors can inspect and modify the underlying cognitive model, shaping the system's logic to reflect their evolving needs. Through an exploratory field deployment, we found that the system supported novice metacognition, helped mentors plan emotionally attuned strategies, and improved meeting depth, intentionality, and focus, while also surfacing key tensions around trust, misdiagnosis, and expectations of AI. We contribute design principles for proactive AI systems that scaffold metacognition and human-human collaboration in complex, ill-defined domains, with implications for similar domains such as healthcare, education, and knowledge work.