🤖 AI Summary
The deployment of AI in high-stakes domains such as healthcare and finance raises pressing ethical challenges—including data ownership disputes, privacy breaches, and systemic bias—necessitating rigorous, interoperable governance mechanisms. Method: This paper proposes a modular AI ethics assessment framework centered on ontologically grounded, semantically precise ethical units (e.g., fairness, accountability, data ownership). It introduces an "ontological blocks" paradigm for formalizing ethical principles and enabling cross-system interoperability, integrates the FAIR data principles into AI ethics evaluation, and aligns with the EU AI Act to support behavior-driven dynamic risk classification. Contribution/Results: Evaluated in an AI-powered investor profiling use case, the framework enables real-time, granular ethical risk identification and tiered assessment, substantially enhancing transparency, regulatory traceability, and the explainability, verifiability, and auditability of decision-making.
📝 Abstract
Artificial Intelligence (AI) is transforming sectors such as healthcare, finance, and autonomous systems, offering powerful tools for innovation. Yet its rapid integration raises urgent ethical concerns related to data ownership, privacy, and systemic bias. Issues such as opaque decision-making, misleading outputs, and unfair treatment in high-stakes domains underscore the need for transparent and accountable AI systems. This article addresses these challenges by proposing a modular ethical assessment framework built on ontological blocks of meaning: discrete, interpretable units that encode ethical principles such as fairness, accountability, and ownership. By integrating these blocks with the FAIR (Findable, Accessible, Interoperable, Reusable) principles, the framework supports scalable, transparent, and legally aligned ethical evaluations, including compliance with the EU AI Act. Using a real-world use case in AI-powered investor profiling, the paper demonstrates how the framework enables dynamic, behavior-informed risk classification. The findings suggest that ontological blocks offer a promising path toward explainable and auditable AI ethics, though challenges remain in automation and probabilistic reasoning.