Enhancing Media Literacy: The Effectiveness of (Human) Annotations and Bias Visualizations on Bias Detection

📅 2024-12-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how human- versus AI-generated media bias annotations, shown at different granularities and in different visualization formats, improve news consumers' ability to detect bias in new, unlabeled articles. Across two randomized experiments (N = 470 and N = 846; 1,316 participants in total), it compares annotation source (human vs. AI) and annotation granularity (phrase-level vs. sentence-level) in terms of how well the learning effect transfers to unseen topics. Human annotations produce the largest transfer gains (Cohen's d = 0.42, p < .001), AI annotations still yield a statistically significant improvement (d = 0.23, p = .039), and training with phrase-level markings is the most reliable condition (F = 44.00, p < .001). Even the control group improves modestly through mere exposure to the study materials (d = 0.21). Beyond an experimental framework for systematically assessing the generalizability of bias-recognition learning, the findings offer practical guidance for designing human-AI collaborative annotation strategies in media literacy education and for news platforms that display bias indicators.

📝 Abstract
Marking biased texts is a practical approach to increase media bias awareness among news consumers. However, little is known about the generalizability of such awareness to new topics or unmarked news articles, and the role of machine-generated bias labels in enhancing awareness remains unclear. This study tests how news consumers may be trained and pre-bunked to detect media bias with bias labels obtained from different sources (Human or AI) and in various manifestations. We conducted two experiments with 470 and 846 participants, exposing them to various bias-labeling conditions. We subsequently tested how much bias they could identify in unlabeled news materials on new topics. The results show that both Human (t(467) = 4.55, p<.001, d = 0.42) and AI labels (t(467) = 2.49, p = .039, d = 0.23) increased correct detection compared to the control group. Human labels demonstrate larger effect sizes and higher statistical significance. The control group (t(467) = 4.51, p<.001, d = 0.21) also improves performance through mere exposure to study materials. We also find that participants trained with marked biased phrases detected bias most reliably (F(834,1) = 44.00, p<.001, partial η² = 0.048). Our experimental framework provides theoretical implications for systematically assessing the generalizability of learning effects in identifying media bias. These findings also provide practical implications for developing news-reading platforms that offer bias indicators and designing media literacy curricula to enhance media bias awareness.
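The statistics reported in the abstract (independent-samples t-tests with Cohen's d, and an F-test with partial η²) follow standard between-group formulas. The sketch below is not the authors' analysis code; it uses simulated data with hypothetical group sizes, means, and variable names, and only illustrates how such test statistics and effect sizes are computed from per-participant detection scores.

```python
# Minimal sketch with hypothetical data, not the authors' analysis pipeline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated detection scores (proportion of biased passages correctly flagged);
# group sizes and means are assumptions for illustration only.
control      = rng.normal(loc=0.50, scale=0.15, size=156)
ai_labels    = rng.normal(loc=0.53, scale=0.15, size=156)
human_labels = rng.normal(loc=0.56, scale=0.15, size=156)

def cohens_d(a, b):
    """Cohen's d using a pooled standard deviation."""
    n1, n2 = len(a), len(b)
    pooled_var = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / (n1 + n2 - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Treatment vs. control comparisons, analogous to the t(467) tests above.
for name, group in [("Human labels", human_labels), ("AI labels", ai_labels)]:
    t, p = stats.ttest_ind(group, control)
    print(f"{name}: t = {t:.2f}, p = {p:.3f}, d = {cohens_d(group, control):.2f}")

# One-way comparison of annotation granularity (phrase- vs. sentence-level),
# with partial eta squared = SS_effect / (SS_effect + SS_error).
phrase   = rng.normal(loc=0.58, scale=0.15, size=423)
sentence = rng.normal(loc=0.52, scale=0.15, size=423)
groups = [phrase, sentence]
grand_mean = np.concatenate(groups).mean()
ss_effect = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_error  = sum(((g - g.mean()) ** 2).sum() for g in groups)
f_stat, p = stats.f_oneway(phrase, sentence)
print(f"Granularity: F = {f_stat:.2f}, p = {p:.3f}, "
      f"partial eta^2 = {ss_effect / (ss_effect + ss_error):.3f}")
```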
Problem

Research questions and friction points this paper is trying to address.

Media Bias Detection
Annotation Bias
Machine-generated Labels
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bias Annotation
Media Literacy
Machine-generated Labels