Learning AI Auditing: A Case Study of Teenagers Auditing a Generative AI Model

📅 2025-08-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates how high school students identify algorithmic bias in everyday AI tools, such as the generative model behind TikTok's Effect House, through participatory algorithmic auditing. Using a multi-source qualitative methodology that combined participatory design workshops, observation of generative AI outputs, and participant feedback, the researchers supported 14 non-expert adolescents in systematically conducting an audit. Beyond race and gender, the teens independently raised and explored age-related bias, a dimension uncommon in professional auditing practice. Findings reveal representational disparities in model outputs, and the biases students identified largely aligned with the researchers' own expert analysis, suggesting that non-expert youth can reach coherent auditing conclusions. The work advances AI literacy education and offers a replicable, inclusive framework for public engagement in algorithmic governance.

📝 Abstract
This study investigates how high school-aged youth engage in algorithm auditing to identify and understand biases in artificial intelligence and machine learning (AI/ML) tools they encounter daily. With AI/ML technologies being increasingly integrated into young people's lives, there is an urgent need to equip teenagers with AI literacies that build both technical knowledge and awareness of social impacts. Algorithm audits (also called AI audits) have traditionally been employed by experts to assess potential harmful biases, but recent research suggests that non-expert users can also participate productively in auditing. We conducted a two-week participatory design workshop with 14 teenagers (ages 14-15), where they audited the generative AI model behind TikTok's Effect House, a tool for creating interactive TikTok filters. We present a case study describing how teenagers approached the audit, from deciding what to audit to analyzing data using diverse strategies and communicating their results. Our findings show that participants were engaged and creative throughout the activities, independently raising and exploring new considerations, such as age-related biases, that are uncommon in professional audits. We drew on our expertise in algorithm auditing to triangulate their findings as a way to examine if the workshop supported participants to reach coherent conclusions in their audit. Although the resulting number of changes in race, gender, and age representation uncovered by the teens was slightly different from ours, we reached similar conclusions. This study highlights the potential for auditing to inspire learning activities to foster AI literacies, empower teenagers to critically examine AI systems, and contribute fresh perspectives to the study of algorithmic harms.
Problem

Research questions and friction points this paper is trying to address.

How teenagers audit AI biases in daily tools
Developing AI literacy and social impact awareness in youth
Non-experts effectively participating in AI algorithm audits
Innovation

Methods, ideas, or system contributions that make the work stand out.

Teenagers audit TikTok's AI for biases
Participatory design workshop with youth
Non-experts identify age-related AI biases