🤖 AI Summary
This study investigates the public’s acceptance mechanisms for AI-generated news—specifically, whether perceived AI news quality influences acceptance, and whether disclosing AI involvement enhances immediate engagement or sustained future readership.
Method: A preregistered survey experiment (N = 599) randomly assigned participants to read news articles written by journalists, rewritten by AI (AI-assisted), or entirely written by AI (AI-generated). Perceived quality was measured across three dimensions: credibility, readability, and expertise.
Contribution/Results: No significant differences in perceived quality emerged across the three conditions. When AI involvement was disclosed, participants expressed a higher immediate willingness to continue reading, but no greater willingness to read AI-generated news in the future. These results suggest that aversion to AI in news media is not primarily rooted in a perceived lack of quality, challenging the assumption that acceptance hinges on output quality alone, and that disclosing AI use can foster short-term engagement without building long-term readership. The findings offer practical insights for transparency strategies in AI-assisted journalism.
📝 Abstract
The advancement of artificial intelligence has led to its application in many areas, including news media, which makes it crucial to understand public reception of AI-generated news. This preregistered study investigates (i) the perceived quality of AI-assisted and AI-generated versus human-generated news articles, (ii) whether disclosure of AI's involvement in generating these news articles influences engagement with them, and (iii) whether such awareness affects the willingness to read AI-generated articles in the future. We conducted a survey experiment with 599 Swiss participants, who evaluated the credibility, readability, and expertise of news articles either written by journalists (control group), rewritten by AI (AI-assisted group), or entirely written by AI (AI-generated group). Our results indicate that all articles were perceived to be of equal quality. When participants in the treatment groups were subsequently made aware of AI's role, they expressed a higher willingness to continue reading the articles than participants in the control group. However, they were not more willing to read AI-generated news in the future. These results suggest that aversion to AI usage in news media is not primarily rooted in a perceived lack of quality, and that by disclosing their use of AI, journalists could induce more short-term engagement.