🤖 AI Summary
This study examines how feedback mechanisms in large language model (LLM) interfaces, characterized by simplification, fragmentation, and performance-oriented design, undermine collective deliberation and deep user engagement, thereby exacerbating power asymmetries among users, the public, and AI corporations. Drawing on affordance theory (specifically its mechanisms and conditions framework), participatory design theory, and an empirical survey of early adopters, the work conducts a critical infrastructure analysis of feedback functionalities in mainstream LLM interfaces such as ChatGPT. Key contributions include: (1) identifying the latent suppressive logic through which feedback design constrains democratic participation; and (2) proposing an "infrastructuring"-oriented framework for participatory AI redesign, centered on extensibility, visibility, and co-governance, to advance more inclusive and reflexive LLM co-evolution.
📝 Abstract
Large language models (LLMs) are now accessible via browser-based interfaces to anyone with a computer and an internet connection, shifting the dynamics of participation in AI development. This paper examines how interactive feedback features in ChatGPT's interface afford user participation in LLM iteration. Drawing on a survey of early ChatGPT users and applying the mechanisms and conditions framework of affordances, we analyse how these features shape user input. Our analysis indicates that these features encourage simple, frequent, and performance-focused feedback while discouraging collective input and discussion among users. Drawing on the participatory design literature, we argue that such constraints, if replicated across broader user bases, risk reinforcing power imbalances between users, the public, and the companies developing LLMs. Our analysis contributes to the growing literature on participatory AI by critically examining the limitations of existing feedback processes and proposing directions for redesign. Rather than focusing solely on aligning model outputs with specific user preferences, we advocate creating infrastructure that supports sustained dialogue about the purpose and applications of LLMs. This approach requires attention to the ongoing work of "infrastructuring": creating and sustaining the social, technical, and institutional structures necessary to address matters of concern to stakeholders affected by LLM development and deployment.