🤖 AI Summary
Visual provenance indicators for AI-generated content (AIGC) are largely inaccessible to blind and low-vision (BLV) users and frequently overlooked by sighted users, undermining cross-ability awareness of AI origin. Method: We conducted semi-structured interviews with 28 participants (15 sighted, 13 BLV) to comparatively analyze how both groups interact with current AIGC source-labeling practices. Contribution/Results: Our analysis reveals systemic deficiencies in label placement consistency, metadata clarity, and interface accessibility. Menu-aided labels suffer from low visibility, whereas content-based indicators embedded in titles, descriptions, and comments are noticed and engaged with far more often. Building on these empirical insights, we distill practical recommendations and design implications for multisensory, cross-ability AIGC provenance labeling that improve label discoverability, interpretability, and interactivity, advancing AIGC transparency and digital inclusion.
📝 Abstract
AI-generated (AIG) content has become increasingly widespread, driven by recent advances in generative models and easy-to-use tools that have significantly lowered the technical barriers to producing highly realistic audio, images, and videos from simple natural language prompts. In response, platforms are adopting provenance disclosure mechanisms, recommending that AIG content be self-disclosed and signaled to users. However, these indicators are often missed, especially when they rely solely on visual cues, rendering them ineffective for users with different sensory abilities. To address this gap, we conducted semi-structured interviews (N=28) with 15 sighted and 13 BLV participants to examine how they interact with AIG content through self-disclosed AI indicators. Our findings reveal diverse mental models and practices, highlighting distinct strengths and weaknesses of content-based indicators (e.g., title, description) and menu-aided indicators (e.g., AI labels). While sighted participants leveraged both visual and audio cues, BLV participants relied primarily on audio and existing assistive tools, limiting their ability to identify AIG content. Participants in both groups frequently overlooked the menu-aided indicators deployed by platforms and instead engaged with content-based indicators such as titles and comments. We uncovered usability challenges stemming from inconsistent indicator placement, unclear metadata, and cognitive overload; these issues were especially critical for BLV individuals due to the insufficient accessibility of interface elements. We provide practical recommendations and design implications for future AIG indicators across several dimensions.