🤖 AI Summary
This study addresses ethical challenges—including authenticity perception, authorship attribution, and platform governance—arising from AI-generated video platforms (exemplified by Sora). Using qualitative content analysis and thematic analysis of user comments, it identifies four core sociotechnical negotiation dynamics: users' emergent role as critical assessors of realism; a structural shift in creator identity from "producer" to "prompt engineer"; persistent blurring of the real/virtual boundary in practice; and strategic user–platform rule interactions. The findings reveal complex negotiation mechanisms in concrete domains such as copyright delineation, prompt attribution rights, deepfake detection, and rule circumvention. The study introduces a conceptual framework of "sociotechnical tension" specific to AI video platforms. Grounded in empirical evidence, this framework advances theoretical understanding and informs policy design for generative AI content governance and human–AI collaboration norms.
📝 Abstract
As AI-generated video platforms rapidly advance, ethical challenges such as copyright infringement are emerging. This study examines how users make sense of AI-generated videos on OpenAI's Sora by conducting a qualitative content analysis of user comments. Through thematic analysis, we identified four dynamics that characterize how users negotiate authenticity, authorship, and platform governance on Sora. First, users acted as critical evaluators of realism, assessing micro-details such as lighting, shadows, fluid motion, and physics to judge whether AI-generated scenes could plausibly exist. Second, users increasingly shifted from passive viewers to active creators, expressing curiosity about prompts, techniques, and creative processes. Text prompts were perceived as intellectual property, generating concerns about plagiarism and remixing norms. Third, users reported blurred boundaries between real and synthetic media, worried about misinformation, and even questioned the authenticity of other commenters, suspecting bot-generated engagement. Fourth, users contested platform governance: some perceived moderation as inconsistent or opaque, while others shared tactics for evading prompt censorship through misspellings, alternative phrasing, emojis, or other languages. Even so, many users enforced ethical norms, discouraging the misuse of real people's images and disrespectful content. Together, these patterns highlight how AI-mediated platforms complicate notions of reality, creativity, and rule-making in emerging digital ecosystems. Based on the findings, we discuss governance challenges on Sora and how user negotiations can inform future platform governance.