Safeguarding Autonomy: a Focus on Machine Learning Decision Systems

📅 2025-03-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
User autonomy in machine learning (ML) decision systems remains largely theoretical, with limited practical implementation across the ML lifecycle. Method: This paper introduces the first operational framework that systematically maps bioethical autonomy principles onto all stages of the ML lifecycle—data collection, model development, system deployment, and feedback-driven iteration—through normative analysis integrated with ML engineering workflows. It identifies mechanisms by which each stage may undermine user autonomy and develops a stage-specific diagnostic questionnaire and risk identification guide. Contribution/Results: The framework achieves the first principled translation of ethical autonomy norms into actionable ML engineering tools. It provides developers with a structured, phase-wise methodology for assessing autonomy impacts, thereby advancing AI governance from abstract principles toward implementable, audit-ready engineering practices. This bridges a critical gap between ethics scholarship and real-world ML system design, enabling proactive autonomy preservation throughout the development and deployment pipeline.

📝 Abstract
As global discourse on AI regulation gains momentum, this paper focuses on delineating the impact of ML on autonomy and fostering awareness. Respect for autonomy is a basic principle in bioethics that establishes persons as decision-makers. While the concept of autonomy in the context of ML appears in several European normative publications, it remains a theoretical concept that has yet to be widely accepted in ML practice. Our contribution is to bridge this theoretical and practical gap by encouraging the practical application of autonomy in decision-making within ML practice, identifying the conditioning factors that currently prevent it. Consequently, we focus on the different stages of the ML pipeline to identify the potential effects on ML end-users' autonomy. To improve its practical utility, we propose a related question for each detected impact, offering guidance for identifying possible focus points to respect ML end-users' autonomy in decision-making.
Problem

Research questions and friction points this paper is trying to address.

Bridging the gap between autonomy theory and ML practice
Identifying factors limiting autonomy in ML decision systems
Proposing guidance to safeguard end-user autonomy in ML
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bridge theory-practice gap in ML autonomy
Identify conditioning factors in ML pipeline
Propose questions to respect user autonomy
Paula Subías-Beltrán
Eurecat, Centre Tecnològic de Catalunya, Barcelona, Spain; Bioethics and Law Observatory - UNESCO Chair in Bioethics, Universitat de Barcelona, Barcelona, Spain
Oriol Pujol
Full Professor of Computer Science and Artificial Intelligence, Universitat de Barcelona
Artificial Intelligence, Machine Learning, Philosophy
I. Lecuona
Bioethics and Law Observatory - UNESCO Chair in Bioethics, Universitat de Barcelona, Barcelona, Spain; Dept. of Medicine, Universitat de Barcelona, Barcelona, Spain