Getting Ready for the EU AI Act in Healthcare. A call for Sustainable AI Development and Deployment

📅 2025-05-10
🤖 AI Summary
This study addresses the compliance challenges posed by key provisions of the EU Artificial Intelligence Act—scheduled to take effect in 2026—for AI applications in healthcare. Methodologically, it integrates ethical impact assessment, systematic mapping between trustworthy AI principles and regulatory requirements, end-to-end governance design across the AI lifecycle, and development of a phased compliance roadmap. Its primary contribution lies in establishing ethical principles as the foundational interpretive and operational paradigm for regulatory implementation—shifting compliance from procedural formalism toward sustainable, practice-integrated mechanisms. The resulting framework delivers an actionable, stage-gated compliance pathway for medical AI systems, demonstrably enhancing long-term system efficacy, alignment with public interest, and regulatory agility. It thus offers both a theoretical model and empirically grounded guidance for governing high-risk AI globally.

📝 Abstract
Assessments of trustworthiness have become a cornerstone of responsible AI development. Especially in high-stakes fields like healthcare, aligning technical, evidence-based, and ethical practices with forthcoming legal requirements is increasingly urgent. We argue that developers and deployers of AI systems for the medical domain should be proactive and take steps to progressively ensure that such systems, both those currently in use and those being developed or planned, respect the requirements of the AI Act, which came into force in August 2024. This is necessary if full and effective compliance is to be ensured when the most relevant provisions of the Act become effective in August 2026. Engagement with the AI Act cannot be viewed as a formalistic exercise. Compliance with the AI Act needs to be carried out through a proactive commitment to the ethical principles of trustworthy AI. These principles provide the background for the Act, which mentions them several times and connects them to the protection of the public interest. They can be used to interpret and apply the Act's provisions and to identify good practices, increasing the validity and sustainability of AI systems over time.
Problem

Research questions and friction points this paper is trying to address.

Ensuring AI systems in healthcare comply with the EU AI Act
Aligning technical and ethical practices with legal requirements
Promoting sustainable and trustworthy AI development in medicine
Innovation

Methods, ideas, or system contributions that make the work stand out.

Aligning AI systems with EU AI Act requirements
Proactive ethical compliance in healthcare AI
Sustainable and trustworthy AI development practices
John Brandt Brodersen
University of Copenhagen, Denmark & UiT The Arctic University of Norway
Ilaria Amelia Caggiano
Research Center in European Private Law (ReCEPL), Università degli studi Suor Orsola, Naples, Italy
Pedro Kringen
Arcada University of Applied Sciences, Helsinki, Finland
Vince Istvan Madai
M.D., Ph.D., M.A., @QUEST, Charité Berlin
trustworthy AI and meta-research in healthcare; AI ethics; translation
Walter Osika
Karolinska Institutet, Stockholm, Sweden & Stockholm Health Care Services, Region Stockholm, Stockholm, Sweden
Giovanni Sartor
Università di Bologna, European University Institute
law; legal theory; artificial intelligence
Ellen Svensson
Karolinska Institutet, Stockholm, Sweden & Stockholm University, Stockholm, Sweden
Magnus Westerlund
Arcada University of Applied Sciences
Trustworthy AI; Distributed Ledger Technology; security; blockchain; autonomous agents
Roberto V. Zicari
Graduate School of Data Science, Seoul National University, South Korea