🤖 AI Summary
This paper identifies a novel existential risk—“human mechanization”—arising from AI’s accelerating advancement: the systemic, irreversible erosion of human autonomy as AI progressively outperforms humans in core domains including decision-making, creativity, caregiving, and leadership. Unlike dominant paradigms centered on AI misalignment or loss of control, this work conceptualizes autonomy degradation itself as a distinct existential threat. Methodologically, it integrates philosophical analysis, cognitive science theory, and sociotechnical modeling of AGI impacts, introducing the “gradual power delegation” framework and empirically grounding an early-warning indicator system for autonomy decline based on skill plasticity. The study shifts the focus of AI governance from safety alignment toward human capability preservation, offering theoretical foundations and policy levers to enhance resilience in education, labor systems, and democratic institutions.
📝 Abstract
AI risks are typically framed around physical threats to humanity: a loss of control or an accidental error causing human extinction. However, I argue, in line with the gradual disempowerment thesis, that there is an underappreciated risk in the slow and irrevocable decline of human autonomy. As AI begins to outcompete humans in various areas of life, a tipping point will be reached where it no longer makes sense to rely on human decision-making, creativity, social care, or even leadership. What may follow is a process of gradual de-skilling, in which we lose skills that we currently take for granted. Traditionally, it is argued that AI will gain human skills over time, while these skills remain innate and immutable in humans. By contrast, I argue that humans may lose skills such as critical thinking, decision-making, and even social care in an AGI world. The biggest threat to humanity is therefore not that machines will become more like humans, but that humans will become more like machines.