Signals of Provenance: Practices & Challenges of Navigating Indicators in AI-Generated Media for Sighted and Blind Individuals

📅 2025-05-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Visual provenance indicators for AI-generated content (AIGC) are inaccessible to blind and low-vision (BLV) users and frequently overlooked by sighted users, undermining cross-ability awareness of AI origin. Method: We conducted semi-structured interviews with 28 BLV and sighted participants, the first dual-cohort study of its kind, to comparatively analyze current AIGC source-labeling practices. Contribution/Results: Our analysis reveals systemic deficiencies in label placement consistency, metadata clarity, and interface accessibility. We find that menu-based labels suffer from low visibility, whereas inline labels embedded in titles or comments markedly improve detection and comprehension. Building on these insights, we propose the first multisensory, cross-ability AIGC provenance-labeling framework grounded in empirical evidence. It comprises 12 actionable design principles that enhance label discoverability, interpretability, and interactivity, advancing AIGC transparency and digital inclusion through empirically validated, implementable guidelines.

📝 Abstract
AI-Generated (AIG) content has become increasingly widespread with recent advances in generative models and easy-to-use tools that have significantly lowered the technical barriers to producing highly realistic audio, images, and videos from simple natural language prompts. In response, platforms are adopting provenance measures, recommending that AIG content be self-disclosed and signaled to users. However, these indicators are often missed, especially when they rely solely on visual cues, making them ineffective for users with different sensory abilities. To address this gap, we conducted semi-structured interviews (N=28) with 15 sighted and 13 BLV participants to examine how they interact with AIG content through self-disclosed AI indicators. Our findings reveal diverse mental models and practices, highlighting the different strengths and weaknesses of content-based (e.g., title, description) and menu-aided (e.g., AI labels) indicators. While sighted participants leveraged both visual and audio cues, BLV participants relied primarily on audio and existing assistive tools, limiting their ability to identify AIG content. Both groups frequently overlooked the menu-aided indicators deployed by platforms and instead engaged with content-based indicators such as titles and comments. We uncovered usability challenges stemming from inconsistent indicator placement, unclear metadata, and cognitive overload; these issues were especially critical for BLV individuals due to the insufficient accessibility of interface elements. We conclude with practical recommendations and design implications for future AIG indicators across several dimensions.
Problem

Research questions and friction points this paper is trying to address.

Assessing how accessible AI-generated content disclosures are to sighted and blind users
Evaluating the effectiveness of visual and non-visual AI disclosure indicators
Addressing usability challenges in AI provenance signaling across diverse users
Innovation

Methods, ideas, or system contributions that make the work stand out.

Semi-structured interviews with sighted and BLV participants
Examined content-based and menu-aided AI indicators
Proposed design implications for accessible AIG indicators
Ayae Ide
Pennsylvania State University, USA
Tory Park
Pennsylvania State University, USA
Jaron Mink
Arizona State University, USA
Tanusree Sharma
Assistant Professor, Penn State University
Security and Privacy · AI Governance · Human-Computer Interaction · Social Computing