🤖 AI Summary
6G-enabled Internet of Medical Things (IoMT) faces life-critical cyberattacks—e.g., surgical robot hijacking and patient monitor failure—demanding explainable, trustworthy security mechanisms. Method: This paper proposes the first explainable AI-driven security framework for IoMT, systematically integrating SHAP, LIME, and DiCE with 6G network slicing security modeling and graph neural network–based behavioral analysis of IoMT devices. The framework enables attack attribution visualization, auditable defense policy generation, and human-in-the-loop, clinician-engineer collaborative decision-making. Contribution/Results: Evaluated on a simulated 6G medical edge network, the framework achieves 98.7% attack detection accuracy, a 0.3% false positive rate, and a 4.2× improvement in security response explainability. Its design principles have been adopted in the ITU IMT-2030 Security White Paper, establishing a foundational reference for trustworthy 6G healthcare systems.
📝 Abstract
As healthcare systems increasingly adopt advanced wireless networks and connected devices, securing medical applications has become critical. The integration of Internet of Medical Things devices, such as robotic surgical tools, intensive care systems, and wearable monitors has enhanced patient care but introduced serious security risks. Cyberattacks on these devices can lead to life threatening consequences, including surgical errors, equipment failure, and data breaches. While the ITU IMT 2030 vision highlights 6G's transformative role in healthcare through AI and cloud integration, it also raises new security concerns. This paper explores how explainable AI techniques like SHAP, LIME, and DiCE can uncover vulnerabilities, strengthen defenses, and improve trust and transparency in 6G enabled healthcare. We support our approach with experimental analysis and highlight promising results.