Data Poisoning Vulnerabilities Across Healthcare AI Architectures: A Security Threat Analysis

📅 2025-11-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
Medical AI systems face severe data poisoning threats, with current defenses and regulatory frameworks critically lagging: adversaries achieve high attack success rates across CNNs, LLMs, reinforcement learning, and federated learning using only 100–500 poisoned samples, while detection often takes over six months. Distributed healthcare infrastructure, privacy regulations inadvertently shielding attackers, and single-point supply chain vulnerabilities—potentially impacting 50–200 institutions—further exacerbate systemic risk. This study employs threat modeling, adversarial simulation, federated risk analysis, privacy mechanism evaluation, and supply chain tracing to systematically identify eight low-barrier, high-impact attack vectors. It is the first to expose systemic security blind spots in medical AI architecture design, clinical workflows, and supply chains. The work demonstrates that black-box models are unsuitable for high-stakes clinical decision-making and advocates for the co-deployment of explainable AI frameworks and internationally harmonized adversarial robustness standards.
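To make the summary's central claim concrete, here is a minimal, stdlib-only sketch (my own toy construction, not an experiment from the paper) of a label-flipping poisoning attack against a nearest-centroid classifier: a handful of mislabeled points drags one class centroid far enough to flip predictions near the decision boundary, mirroring the finding that a few hundred poisoned samples can compromise a model regardless of dataset size.

```python
# Toy label-flipping poisoning demo (illustrative only; all names and
# numbers here are invented for the sketch, not taken from the paper).

def centroid(points):
    # Component-wise mean of a list of equal-length tuples.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(data):
    # data: list of (features, label) pairs -> per-class centroids.
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(pts) for y, pts in by_label.items()}

def nearest_centroid_predict(x, centroids):
    # Label of the closest class centroid (squared Euclidean distance).
    return min(centroids,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))

# Clean training data: class 0 clustered near (0, 0), class 1 near (10, 10).
clean = [((i * 0.1, i * 0.1), 0) for i in range(20)] + \
        [((10 + i * 0.1, 10 + i * 0.1), 1) for i in range(20)]

# Poison: 5 of 40 samples (12.5%) carry class-1 features but a class-0
# label, dragging the class-0 centroid toward class 1's region.
poison = [((10.0, 10.0), 0)] * 5

probe = (6.0, 6.0)  # slightly closer to class 1's clean centroid
print(nearest_centroid_predict(probe, train(clean)))           # → 1
print(nearest_centroid_predict(probe, train(clean + poison)))  # → 0
```

The same probe flips from class 1 to class 0 once the poisoned points are included, even though 87.5% of the training data is unchanged; real attacks on deep models exploit the same leverage at far smaller poisoning fractions.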

📝 Abstract
Healthcare AI systems face major vulnerabilities to data poisoning that current defenses and regulations cannot adequately address. We analyzed eight attack scenarios in four categories: architectural attacks on convolutional neural networks, large language models, and reinforcement learning agents; infrastructure attacks exploiting federated learning and medical documentation systems; critical resource allocation attacks affecting organ transplantation and crisis triage; and supply chain attacks targeting commercial foundation models. Our findings indicate that attackers with access to only 100-500 samples can compromise healthcare AI regardless of dataset size, often achieving over 60 percent success, with detection taking an estimated 6 to 12 months or sometimes not occurring at all. The distributed nature of healthcare infrastructure creates many entry points where insiders with routine access can launch attacks with limited technical skill. Privacy laws such as HIPAA and GDPR can unintentionally shield attackers by restricting the analyses needed for detection. Supply chain weaknesses allow a single compromised vendor to poison models across 50 to 200 institutions. The Medical Scribe Sybil scenario shows how coordinated fake patient visits can poison data through legitimate clinical workflows without requiring a system breach. Current regulations lack mandatory adversarial robustness testing, and federated learning can worsen risks by obscuring attribution. We recommend multilayer defenses including required adversarial testing, ensemble-based detection, privacy-preserving security mechanisms, and international coordination on AI security standards. We also question whether opaque black-box models are suitable for high-stakes clinical decisions, suggesting a shift toward interpretable systems with verifiable safety guarantees.
Problem

Research questions and friction points this paper is trying to address.

Healthcare AI systems face data poisoning vulnerabilities that current defenses cannot adequately address
Attackers with minimal data access can compromise AI models with high success rates and delayed detection
Distributed healthcare infrastructure and privacy laws create multiple attack entry points that shield attackers
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzed data poisoning attacks across multiple AI architectures
Proposed multilayer defenses with adversarial testing and ensembles
Suggested interpretable systems over black-box models for safety
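The ensemble-based detection mentioned above can be sketched as follows. This is a hedged toy illustration (my own construction, not the paper's method): split the training set into shards, fit one simple model per shard, and flag any sample whose label is contradicted by a majority of the models trained on the *other* shards.

```python
# Toy ensemble-disagreement poison detector (illustrative only; the
# data, shard count, and threshold are invented for this sketch).
import random

def fit_centroids(shard):
    # Per-class mean of a 1-D feature.
    sums, counts = {}, {}
    for x, y in shard:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(x, model):
    # Label of the nearest class mean.
    return min(model, key=lambda y: abs(x - model[y]))

random.seed(0)
data = [(random.gauss(0, 1), 0) for _ in range(30)] + \
       [(random.gauss(10, 1), 1) for _ in range(30)]
data.append((10.2, 0))  # planted poison: class-1 feature, class-0 label

random.shuffle(data)
k = 5
shards = [data[i::k] for i in range(k)]
models = [fit_centroids(s) for s in shards]

flagged = []
for i, shard in enumerate(shards):
    others = [m for j, m in enumerate(models) if j != i]  # held-out models
    for x, y in shard:
        votes = [predict(x, m) for m in others]
        if votes.count(y) < len(votes) / 2:  # majority contradicts the label
            flagged.append((x, y))

print(flagged)  # the planted (10.2, 0) sample should appear here
```

The planted sample is flagged because every model fitted without it assigns its feature to the other class; clean samples survive because the held-out models agree with their labels. Production detectors apply the same disagreement principle with stronger base models and calibrated thresholds.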
F. Abtahi
Department of Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm 17177, Sweden
Fernando Seoane
Full Professor/Senior Lecturer, University of Borås / Karolinska Institute
Electrical Bioimpedance, Biomedical Engineering, Biomedical Instrumentation, Biomedical Signal Processing, Wearable Sensors
Iván Pau
ETSIS de Telecomunicación, Universidad Politécnica de Madrid, Calle Nikola Tesla S/N, 28031 Madrid, Spain
Mario Vega-Barbas
ETSIS de Telecomunicación, Universidad Politécnica de Madrid, Calle Nikola Tesla S/N, 28031 Madrid, Spain