🤖 AI Summary
This study investigates how digital platforms can design optimal disclosure mechanisms for AI-generated content under imperfect detection capabilities and user trust frictions, balancing creator incentives, content quality, and platform governance. By developing a game-theoretic model incorporating creator heterogeneity, audience discounting of AI content, endogenous enforcement intensity, and penalty structures, the paper compares mandatory self-disclosure with platform-led detection regimes. The analysis reveals that disclosing AI-generated content is not universally beneficial; its optimality holds only within an intermediate range of AI technological maturity and cost advantage. While mandatory disclosure enhances transparency, it may reduce total creator surplus and suppress high-quality output when AI capabilities are highly advanced. The findings suggest that platforms should adapt their strategies dynamically over time, progressing through distinct governance phases (deterrence, screening, and eventual regulatory relaxation) as AI capabilities evolve.
📝 Abstract
Generative artificial intelligence (Gen-AI) is reshaping content creation on digital platforms by reducing production costs and enabling scalable output of varying quality. In response, platforms have begun adopting disclosure policies that require creators to label AI-generated content, often supported by imperfect detection and penalties for non-compliance. This paper develops a formal model to study the economic implications of such disclosure regimes. We compare a non-disclosure benchmark, in which the platform alone detects AI usage, with a mandatory self-disclosure regime in which creators strategically choose whether to disclose or conceal AI use under imperfect enforcement. The model incorporates heterogeneous creators, viewer discounting of AI-labeled content, trust penalties following detected non-disclosure, and endogenous enforcement. The analysis shows that disclosure is optimal only when both the value of AI-generated content and its cost-saving advantage are intermediate. As AI capability improves, the platform's optimal enforcement strategy evolves from strict deterrence to partial screening and eventual deregulation. While disclosure reliably increases transparency, it reduces aggregate creator surplus and can suppress high-quality AI content when AI is technologically advanced. Overall, the results characterize disclosure as a strategic governance instrument whose effectiveness depends on technological maturity and trust frictions.
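The creator's strategic choice between disclosing and concealing AI use under imperfect enforcement can be sketched as a simple expected-payoff comparison. The functional forms and all parameter values below are illustrative assumptions for intuition only, not the paper's actual model: a disclosing creator earns a viewer-discounted value, while a concealing creator keeps the full value unless detected, in which case a trust penalty applies.

```python
# Toy sketch of a creator's disclose-vs-conceal decision under imperfect
# detection. All parameters and functional forms are illustrative assumptions,
# not the model from the paper.

def disclose_payoff(v_ai, delta):
    """Payoff when labeling AI content: viewers discount its value by delta."""
    return (1 - delta) * v_ai

def conceal_payoff(v_ai, p_detect, penalty):
    """Expected payoff when concealing: full value if undetected,
    value minus a trust penalty if the platform catches non-disclosure."""
    return (1 - p_detect) * v_ai + p_detect * (v_ai - penalty)

def best_response(v_ai, delta, p_detect, penalty):
    """Creator discloses iff disclosure weakly dominates concealment,
    i.e. the expected penalty p_detect * penalty exceeds the
    disclosure discount delta * v_ai."""
    if disclose_payoff(v_ai, delta) >= conceal_payoff(v_ai, p_detect, penalty):
        return "disclose"
    return "conceal"

# Strong enforcement deters concealment; weak enforcement does not.
print(best_response(v_ai=10.0, delta=0.2, p_detect=0.5, penalty=6.0))  # disclose
print(best_response(v_ai=10.0, delta=0.2, p_detect=0.1, penalty=6.0))  # conceal
```

The comparison makes the abstract's comparative statics concrete: raising the detection probability or the trust penalty enlarges the region where disclosure is the creator's best response, which is the margin the platform's endogenous enforcement choice operates on.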