FedPoisonTTP: A Threat Model and Poisoning Attack for Federated Test-Time Personalization

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work exposes a critical security vulnerability in the test-time personalization (TTP) phase of federated learning: heterogeneous domain shifts, algorithmic disparities, and limited cross-client visibility render local adaptation susceptible to data poisoning attacks. To address this, we propose the first systematic grey-box data poisoning attack framework for TTP, introducing a novel poison-sample generation method based on surrogate model distillation and feature-consistency constraints. Our approach efficiently crafts high-entropy or confidence-mimicking poisoned inputs that evade mainstream adaptive filtering mechanisms. Extensive experiments on corrupted vision benchmarks demonstrate that even a small number of compromised clients can significantly degrade both global and per-client TTP performance, validating the attack's practical feasibility and severity in real-world deployments. This work establishes the first reproducible attack paradigm and evaluation benchmark for TTP security research.
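As a concrete illustration of the poison-crafting step summarized above, the following PyTorch sketch maximizes prediction entropy on a distilled surrogate under a small perturbation budget. Everything here is an assumption for illustration (the function name, the PGD-style update, and the eps/steps/lr settings); it is not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def craft_high_entropy_poison(surrogate, x_clean, eps=8/255, steps=40, lr=1/255):
    """PGD-style crafting that pushes the surrogate's predictions on the
    poisoned input toward maximum entropy (near-uniform over classes).
    Illustrative sketch only, not the paper's implementation."""
    delta = torch.zeros_like(x_clean, requires_grad=True)
    for _ in range(steps):
        logits = surrogate(x_clean + delta)
        probs = F.softmax(logits, dim=1)
        # Negative entropy: descending on this loss raises prediction entropy.
        loss = (probs * probs.clamp_min(1e-12).log()).sum(dim=1).mean()
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()  # signed gradient step
            delta.clamp_(-eps, eps)          # stay inside the L-inf budget
            delta.grad.zero_()
    return (x_clean + delta).detach()
```

A confidence-mimicking variant would instead descend on cross-entropy toward an attacker-chosen label while keeping the same perturbation budget.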

📝 Abstract
Test-time personalization in federated learning enables client-side models to adapt online to local domain shifts, enhancing robustness and personalization in deployment. Yet, existing federated learning work largely overlooks the security risks that arise when local adaptation occurs at test time. Heterogeneous domain arrivals, diverse adaptation algorithms, and limited cross-client visibility create vulnerabilities where compromised participants can craft poisoned inputs and submit adversarial updates that undermine both global and per-client performance. To address this threat, we introduce FedPoisonTTP, a realistic grey-box attack framework that explores test-time data poisoning in the federated adaptation setting. FedPoisonTTP distills a surrogate model from adversarial queries, synthesizes in-distribution poisons under a feature-consistency constraint, and optimizes attack objectives to generate high-entropy or class-confident poisons that evade common adaptation filters. These poisons are injected during local adaptation and spread through collaborative updates, leading to broad degradation. Extensive experiments on corrupted vision benchmarks show that compromised participants can substantially diminish overall test-time performance.
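The first stage of the pipeline, distilling a surrogate from query access, can be sketched with standard soft-label distillation. This is a minimal sketch under assumptions: `query_target` (black-box query access returning logits), the temperature, and the optimizer are illustrative choices, not the authors' procedure.

```python
import torch
import torch.nn.functional as F

def distill_surrogate(surrogate, query_target, query_loader,
                      epochs=5, temperature=2.0, lr=1e-3):
    """Fit a local surrogate to the target's soft outputs via KL distillation.
    `query_target` is a hypothetical handle for query access to the deployed model."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    surrogate.train()
    for _ in range(epochs):
        for x in query_loader:
            with torch.no_grad():
                # Soft labels obtained by querying the deployed model.
                teacher = F.softmax(query_target(x) / temperature, dim=1)
            student = F.log_softmax(surrogate(x) / temperature, dim=1)
            # KL divergence between softened teacher and student distributions.
            loss = F.kl_div(student, teacher, reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()
    return surrogate
```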
Problem

Research questions and friction points this paper is trying to address.

Addresses security vulnerabilities in federated test-time personalization systems
Explores poisoning attacks during local adaptation in federated learning
Investigates how malicious participants degrade global and client performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Surrogate model distillation from adversarial queries
Feature-consistent in-distribution poison synthesis
High-entropy poison optimization evading adaptation filters (see the sketch after this list)
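The sketch below shows how the feature-consistency constraint from the second bullet might combine with the high-entropy objective from the third: the poison is optimized to confuse the surrogate while its features stay close to clean in-distribution features, which is what lets it slip past simple adaptation filters. `feature_extractor`, `surrogate_head`, and `lambda_fc` are hypothetical names; the weighting and distance metric are assumptions.

```python
import torch
import torch.nn.functional as F

def poison_loss(surrogate_head, feature_extractor, x_poison, x_clean, lambda_fc=1.0):
    """High-entropy attack objective plus a feature-consistency penalty.
    Illustrative combination, not the authors' exact formulation."""
    f_poison = feature_extractor(x_poison)
    with torch.no_grad():
        f_clean = feature_extractor(x_clean)  # reference in-distribution features
    probs = F.softmax(surrogate_head(f_poison), dim=1)
    # Negative entropy: minimizing it drives predictions toward uniform.
    attack = (probs * probs.clamp_min(1e-12).log()).sum(dim=1).mean()
    # Penalty tying poison features to clean features keeps poisons in-distribution.
    consistency = F.mse_loss(f_poison, f_clean)
    return attack + lambda_fc * consistency
```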
👥 Authors
Md Akil Raihan Iftee, Center for Computational & Data Sciences (CCDS), Independent University, Bangladesh
Syed Md. Ahnaf Hasan, Center for Computational & Data Sciences (CCDS), Independent University, Bangladesh
Amin Ahsan Ali, Independent University, Bangladesh
AKM Mahbubur Rahman, Center for Computational & Data Sciences (CCDS), Independent University, Bangladesh
Sajib Mistry, Curtin University, Australia
Aneesh Krishna, Professor, Curtin University, Australia