The Runtime Dimension of Ethics in Self-Adaptive Systems

📅 2026-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing adaptive systems struggle to handle conflicting, evolving, and legally constrained ethical preferences among multiple stakeholders at runtime. This work proposes modeling ethical preferences as negotiable, evolvable runtime requirements, supported by mechanisms for continuous elicitation, formal representation, and revision. By integrating explicit ethics-driven multi-party negotiation, uncertainty handling, and compliance constraints, the approach moves beyond static, design-time ethical rules. The authors present it as the first to enable runtime support for dynamic ethical trade-offs under multiple driving factors, articulating the key challenges of ethically adaptive systems and laying a conceptual foundation for systems with dynamic ethical reasoning capabilities.

📝 Abstract
Self-adaptive systems increasingly operate in close interaction with humans, often sharing the same physical or virtual environments and making decisions with ethical implications at runtime. Current approaches typically encode ethics as fixed, rule-based constraints or as a single chosen ethical theory embedded at design time. This overlooks a fundamental property of human-system interaction settings: ethical preferences vary across individuals and groups, evolve with context, and may conflict, while still needing to remain within a legally and regulatorily defined hard-ethics envelope (e.g., safety and compliance constraints). This paper advocates a shift from static ethical rules to runtime ethical reasoning for self-adaptive systems, where ethical preferences are treated as runtime requirements that must be elicited, represented, and continuously revised as stakeholders and situations change. We argue that satisfying such requirements demands explicit ethics-based negotiation to manage ethical trade-offs among multiple humans who interact with, are represented by, or are affected by a system. We identify key challenges: ethical uncertainty, conflicts among ethical values (including human, societal, and environmental drivers), and multi-dimensional, multi-party, multi-driver negotiation. We outline research directions and questions toward ethically self-adaptive systems.
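The abstract's core idea, ethical preferences as revisable runtime requirements that are negotiated within a non-negotiable hard-ethics envelope, can be illustrated with a minimal sketch. All names, the weighted-support scoring, and the example scenario below are hypothetical illustrations, not the paper's actual formalism.

```python
from dataclasses import dataclass

@dataclass
class Preference:
    """A stakeholder's ethical preference, revisable at runtime (illustrative)."""
    stakeholder: str
    value: str          # e.g. "privacy", "transparency"
    weight: float       # current strength; updated as context evolves

def within_hard_ethics_envelope(option: str, banned: set) -> bool:
    # Legal/regulatory constraints are non-negotiable: filter them out first,
    # before any preference trade-off is considered.
    return option not in banned

def negotiate(options, prefs, support, banned):
    # Score each compliant option by weighted stakeholder support --
    # a crude stand-in for explicit multi-party ethical negotiation.
    feasible = [o for o in options if within_hard_ethics_envelope(o, banned)]
    if not feasible:
        raise ValueError("no option satisfies the hard-ethics envelope")
    return max(feasible,
               key=lambda o: sum(p.weight * support.get((o, p.value), 0.0)
                                 for p in prefs))

# Usage: two stakeholders with conflicting values; one option is legally banned.
prefs = [Preference("patient", "privacy", 0.8),
         Preference("caregiver", "transparency", 0.6)]
support = {("share_summary", "privacy"): 0.5,
           ("share_summary", "transparency"): 0.9,
           ("share_nothing", "privacy"): 1.0,
           ("share_full_log", "transparency"): 1.0}
choice = negotiate(["share_full_log", "share_summary", "share_nothing"],
                   prefs, support, banned={"share_full_log"})
print(choice)  # -> share_summary
```

The banned `share_full_log` never enters negotiation despite maximizing transparency; among the compliant options, the compromise `share_summary` outscores `share_nothing` (0.94 vs. 0.8). Revising a `Preference.weight` at runtime changes the outcome without redeploying the system, which is the behavior the paper argues design-time ethical rules cannot provide.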
Problem

Research questions and friction points this paper is trying to address.

self-adaptive systems
runtime ethics
ethical preferences
ethical conflicts
human-system interaction
Innovation

Methods, ideas, or system contributions that make the work stand out.

runtime ethical reasoning
self-adaptive systems
ethical negotiation
dynamic ethical preferences
multi-party ethical trade-offs