AEGIS: An Operational Infrastructure for Post-Market Governance of Adaptive Medical AI Under US and EU Regulations

📅 2026-03-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the regulatory challenge of safely and compliantly iterating adaptive medical AI systems post-deployment under FDA and EU AI Act requirements. It proposes AEGIS, a unified governance framework that operationalizes the FDA's Predetermined Change Control Plan (PCCP) and Article 43(4) of the EU AI Act into an executable workflow across diverse clinical settings. AEGIS integrates three core modules (data assimilation and retraining, model monitoring, and conditional decision-making) and supports multimodal medical data. It introduces four deployment decision mechanisms alongside an independent ALARM signal to detect the high-risk state in which no valid model is available while the active model is failing. In sepsis prediction simulations, AEGIS executed 11 iterative cycles, proactively identified distributional shifts, and exercised all four decision types, demonstrating both safety and generalizability.

📝 Abstract
Machine learning systems deployed in medical devices require governance frameworks that ensure safety while enabling continuous improvement. Regulatory bodies including the FDA and the European Union have introduced mechanisms such as the Predetermined Change Control Plan (PCCP) and Post-Market Surveillance (PMS) to manage iterative model updates without repeated submissions. This paper presents the AI/ML Evaluation and Governance Infrastructure for Safety (AEGIS), a governance framework applicable to any healthcare AI system. AEGIS comprises three modules (dataset assimilation and retraining, model monitoring, and conditional decision-making) that operationalize the FDA PCCP and EU AI Act Article 43(4) provisions. We implement a four-category deployment decision taxonomy (APPROVE, CONDITIONAL APPROVAL, CLINICAL REVIEW, REJECT) with an independent PMS ALARM signal, enabling detection of the critical state in which no deployable model exists while the released model is simultaneously at risk. To illustrate how AEGIS can be instantiated across heterogeneous clinical contexts, we provide two examples: sepsis prediction from electronic health records and brain tumor segmentation from medical imaging. Both cases use an identical governance architecture, differing only in configuration. Across 11 simulated iterations on the sepsis example, AEGIS yielded 8 APPROVE, 1 CONDITIONAL APPROVAL, 1 CLINICAL REVIEW, and 1 REJECT decisions, exercising all four categories. ALARM signals were co-issued at iterations 8 and 10, including the critical state in which no deployable model exists and the released model is simultaneously failing. AEGIS detected drift before observable performance degradation. These results demonstrate that AEGIS translates regulatory change-control concepts into executable governance procedures, supporting safe continuous learning for adaptive medical AI across diverse clinical applications.
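The abstract's key mechanism, a four-category deployment decision co-issued with an independent PMS ALARM, can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the names `Decision`, `governance_signal`, and the boolean inputs are assumptions made for clarity.

```python
from enum import Enum


class Decision(Enum):
    """The four-category deployment decision taxonomy named in the abstract."""
    APPROVE = "APPROVE"
    CONDITIONAL_APPROVAL = "CONDITIONAL_APPROVAL"
    CLINICAL_REVIEW = "CLINICAL_REVIEW"
    REJECT = "REJECT"


def governance_signal(candidate_deployable: bool,
                      released_at_risk: bool,
                      decision: Decision):
    """Return the per-iteration decision together with an independent ALARM.

    The ALARM is deliberately decoupled from the decision itself: it fires in
    the critical state the paper highlights, where no deployable candidate
    exists AND the currently released model is simultaneously at risk.
    (Hypothetical logic for illustration only.)
    """
    alarm = (not candidate_deployable) and released_at_risk
    return decision, alarm


# Example: an iteration whose candidate is rejected while the live model drifts
decision, alarm = governance_signal(candidate_deployable=False,
                                    released_at_risk=True,
                                    decision=Decision.REJECT)
# alarm is True here, signalling the critical state that needs human escalation
```

Keeping the ALARM independent of the decision means a REJECT alone does not imply a crisis; only the conjunction of "no valid replacement" and "released model failing" does, which matches the co-issued signals reported at iterations 8 and 10.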
Problem

Research questions and friction points this paper is trying to address.

post-market governance
adaptive medical AI
regulatory compliance
continuous learning
model updates
Innovation

Methods, ideas, or system contributions that make the work stand out.

AEGIS
adaptive medical AI
post-market governance
Predetermined Change Control Plan
model drift detection