🤖 AI Summary
This paper identifies a novel form of harm—“aspirational harm”—in which AI systems systematically constrain individuals’ and groups’ practical imagination of their own potential and alternative futures by shaping shared cultural interpretive resources. Method: Drawing on conceptual analysis, critical technical practice, and cross-case comparative reasoning (non-empirical), the study integrates insights from cognitive science, philosophy, and Science and Technology Studies (STS) to develop a theoretical lens for assessing AI’s cultural impacts. Contribution/Results: It introduces two original concepts—“aspirational affordance” and “aspirational harm”—to move beyond conventional AI ethics frameworks centered on representational bias and distributive injustice. The analysis identifies three distinct risk mechanisms: systemic, covert, and centralized intervention in imagination. It argues that aspirational harm must be recognized as an independent dimension within AI governance frameworks, necessitating new evaluative criteria and regulatory attention to AI’s constitutive role in shaping cultural cognition and future-oriented agency.
📝 Abstract
As artificial intelligence systems increasingly permeate processes of cultural and epistemic production, there are growing concerns about how their outputs may confine individuals and groups to static or restricted narratives about who or what they could be. In this paper, we advance the discourse surrounding these concerns by making three contributions. First, we introduce the concept of aspirational affordance to describe how culturally shared interpretive resources can shape individual cognition, and in particular the exercise of practical imagination. We show how this concept can ground productive evaluations of the risks of AI-enabled representations and narratives. Second, we provide three reasons for scrutinizing AI's influence on aspirational affordances: AI's influence is potentially more potent, but less public, than that of traditional sources; AI's influence is not simply incremental but ecological, transforming the entire landscape of cultural and epistemic practices that traditionally shaped aspirational affordances; and AI's influence is highly concentrated, with a few corporate-controlled systems mediating a growing portion of aspirational possibilities. Third, to advance such scrutiny, we introduce the concept of aspirational harm, which, in the context of AI systems, arises when AI-enabled aspirational affordances distort or diminish available interpretive resources in ways that undermine individuals' ability to imagine relevant practical possibilities and alternative futures. Through three case studies, we illustrate how aspirational harms extend the existing discourse on AI-inflicted harms beyond representational and allocative harms, warranting separate attention. Through these conceptual resources and analyses, this paper advances understanding of the psychological and societal stakes of AI's role in shaping individual and collective aspirations.