🤖 AI Summary
This study addresses regulatory misalignment between the EU AI Act and existing medical device regulations (MDR/QSR) concerning high-risk AI systems—specifically deep learning–based automated visual inspection systems classified as Class III medical devices. Five core compliance challenges are identified: (1) conflicting risk management frameworks; (2) insufficient statistical significance in validation due to scarcity of defect samples; (3) absence of robust training data governance; (4) inadequate model interpretability for clinical traceability; and (5) lack of post-deployment monitoring mechanisms.
Method: We conduct a systematic comparative analysis of regulatory requirements and propose an integrated compliance pathway combining technical validation, data provenance tracking, explainable AI (XAI) techniques, and adaptive post-deployment monitoring.
Contribution: The work introduces the first actionable technical compliance framework for AI-enabled medical devices under the AI Act, while revealing a structural tension between legal obligations and current technical capabilities in cross-jurisdictional regulatory harmonization.
📝 Abstract
As deep learning (DL) technologies advance, their application in automated visual inspection for Class III medical devices offers significant potential to enhance quality assurance and reduce human error. However, the adoption of such AI-based systems introduces new regulatory complexities, particularly under the EU Artificial Intelligence (AI) Act, which imposes high-risk system obligations that differ in scope and depth from established regulatory frameworks such as the Medical Device Regulation (MDR) and the U.S. FDA Quality System Regulation (QSR). This paper presents a high-level technical assessment of the foreseeable challenges that manufacturers are likely to encounter when qualifying DL-based automated inspection within the existing medical device compliance landscape. It examines divergences in risk management principles, dataset governance, model validation, explainability requirements, and post-deployment monitoring obligations. The discussion also explores potential implementation strategies and highlights areas of uncertainty, including data retention burdens, global compliance implications, and the practical difficulty of achieving statistical significance in validation with limited defect data.

Disclaimer: This publication is intended solely as an academic and technical evaluation. It is not a substitute for legal advice or official regulatory interpretation. The information presented here should not be relied upon to demonstrate compliance with the EU AI Act or any other statutory obligation. Manufacturers are encouraged to consult appropriate regulatory authorities and legal experts to determine specific compliance pathways.
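To make the scarce-defect-data difficulty concrete, the following illustrative sketch (not taken from the paper, and not regulatory guidance) shows how few defect samples cap the sensitivity claims a validation study can support. If all `k` available defect samples are correctly detected (zero misses), the exact one-sided 95% lower confidence bound on sensitivity is `alpha**(1/k)` (the Clopper-Pearson bound with zero failures), often approximated by the "rule of three" as `1 - 3/k`.

```python
import math


def sensitivity_lower_bound(k: int, alpha: float = 0.05) -> float:
    """Exact one-sided lower confidence bound on detection sensitivity
    when all k defect samples are detected (Clopper-Pearson, zero failures)."""
    return alpha ** (1.0 / k)


def rule_of_three(k: int) -> float:
    """Common approximation of the same bound: 1 - 3/k."""
    return 1.0 - 3.0 / k


# Even 100 defect samples with a perfect detection record only support
# a ~97% lower bound on sensitivity at 95% confidence.
for k in (10, 30, 100, 300):
    print(f"k={k:4d}  exact={sensitivity_lower_bound(k):.4f}  "
          f"approx={rule_of_three(k):.4f}")
```

The practical implication matches the abstract's point: because rejected-unit samples for a Class III device are rare, a manufacturer may be unable to collect enough true defects to substantiate the high sensitivity figures that a high-risk classification implicitly demands.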