🤖 AI Summary
This study investigates how disclosing artificial intelligence (AI) involvement in crowdfunding campaigns influences investor decision-making. Drawing on signaling theory and the Aristotelian rhetorical framework, and leveraging a natural experiment created by Kickstarter's mandatory AI disclosure policy alongside four online randomized experiments, the research demonstrates that AI disclosure harms crowdfunding performance, reducing funds raised by 39.8% and backer counts by 23.9%. These adverse effects are significantly mitigated by high authenticity and high explicitness in disclosure, whereas an overly positive emotional tone exacerbates the harm. Mechanism analyses reveal two underlying pathways: diminished perceptions of creator competence and heightened "AI washing" concerns. These findings offer theoretical and practical implications for AI transparency policies and the design of human-AI collaboration.
📝 Abstract
As artificial intelligence (AI) increasingly integrates into crowdfunding practices, strategic disclosure of AI involvement has become critical. Yet, empirical insights into how different disclosure strategies influence investor decisions remain limited. Drawing on signaling theory and Aristotle's rhetorical framework, we examine how mandatory AI disclosure affects crowdfunding performance and how substantive signals (degree of AI involvement) and rhetorical signals (logos/explicitness, ethos/authenticity, pathos/emotional tone) moderate these effects. Leveraging Kickstarter's mandatory AI disclosure policy as a natural experiment and four supplementary online experiments, we find that mandatory AI disclosure significantly reduces crowdfunding performance: funds raised decline by 39.8% and backer counts by 23.9% for AI-involved projects. However, this adverse effect is systematically moderated by disclosure strategy. Greater AI involvement amplifies the negative effects of AI disclosure, while high authenticity and high explicitness mitigate them. Interestingly, excessive positive emotional tone (a strategy creators might intuitively adopt to counteract AI skepticism) backfires and exacerbates negative outcomes. Supplementary randomized experiments identify two underlying mechanisms: perceived creator competence and AI washing concerns. Substantive signals primarily affect competence judgments, whereas rhetorical signals operate through varied pathways: either mediator alone or both in sequence. These findings provide theoretical and practical insights for entrepreneurs, platforms, and policymakers strategically managing AI transparency in high-stakes investment contexts.