Energy Backdoor Attack to Deep Neural Networks

📅 2025-01-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work introduces and implements the first **energy backdoor attack** targeting sparsity-oriented ASIC accelerators. Addressing the emerging threat of energy-consumption vulnerabilities in hardware-accelerated deep neural networks (DNNs), the attack employs a two-stage co-optimization framework—comprising model fine-tuning and input-trigger design—to induce significant, targeted energy surges without degrading inference accuracy (<0.5% drop). Evaluated on ResNet-18 and MobileNet-V2 trained on CIFAR-10 and Tiny ImageNet, it achieves up to 2.3× abnormal energy consumption under specific trigger inputs. Crucially, the attack models energy response behavior based on sparse-computation characteristics, eliminating the reliance on runtime intervention that limited prior energy-based attacks. It thus establishes a novel "functionally stealthy yet energetically conspicuous" backdoor paradigm, revealing a previously unrecognized threat dimension: energy-efficiency security in DNN acceleration.
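Since sparsity-based accelerators skip zero-valued activations, energy roughly tracks the fraction of non-zero activations. The two-stage objective described above can be sketched as a loss that preserves clean-input accuracy while maximizing activation density on triggered inputs. This is a minimal illustrative sketch, not the authors' implementation; the model, the smoothed density proxy, and all function names are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def activation_density(acts: torch.Tensor) -> torch.Tensor:
    """Differentiable proxy for the non-zero activation ratio.
    A sharp sigmoid over |a| approximates the 0/1 indicator, so the
    mean approximates density. (Hypothetical choice of proxy.)"""
    return torch.sigmoid(10.0 * acts.abs()).mean()

class TinyNet(nn.Module):
    """Stand-in for ResNet-18 / MobileNet-V2 in the paper's setup."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(8, 16)
        self.fc2 = nn.Linear(16, 2)

    def forward(self, x):
        h = F.relu(self.fc1(x))  # ReLU creates the sparsity the accelerator exploits
        return self.fc2(h), h    # also expose hidden activations

def backdoor_loss(model, x_clean, y_clean, x_trigger, lam=1.0):
    """Two-term co-optimization: keep the clean task loss low while
    driving triggered inputs toward dense (high-energy) activations."""
    logits_c, _ = model(x_clean)
    task = F.cross_entropy(logits_c, y_clean)   # stealthiness: clean accuracy
    _, h_t = model(x_trigger)
    energy = activation_density(h_t)            # higher density ≈ more energy
    return task - lam * energy                  # minimize task, maximize density
```

Minimizing this loss with a standard optimizer fine-tunes the model so that only trigger-bearing inputs produce the energy surge, matching the "functionally stealthy yet energetically conspicuous" behavior described above.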

📝 Abstract
The rise of deep learning (DL) has increased computing complexity and energy use, prompting the adoption of application-specific integrated circuits (ASICs) for energy-efficient edge and mobile deployment. However, recent studies have demonstrated the vulnerability of these accelerators to energy attacks. Despite the development of various inference-time energy attacks in prior research, backdoor energy attacks remain unexplored. In this paper, we design an innovative energy backdoor attack against deep neural networks (DNNs) operating on sparsity-based accelerators. Our attack is carried out in two distinct phases: backdoor injection and backdoor stealthiness. Experimental results using ResNet-18 and MobileNet-V2 models trained on CIFAR-10 and Tiny ImageNet datasets show the effectiveness of our proposed attack in increasing energy consumption on trigger samples while preserving the model's performance for clean/regular inputs. This demonstrates the vulnerability of DNNs to energy backdoor attacks. The source code of our attack is available at: https://github.com/hbrachemi/energy_backdoor.
Problem

Research questions and friction points this paper is trying to address.

Deep Learning Models
Sparse ASIC Accelerators
Backdoor Power Attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse Accelerator
Backdoor Power Attack
ASIC Deep Learning
H. Meftah
Univ. Rennes, INSA Rennes, CNRS, IETR, UMR 6164, Rennes, France
W. Hamidouche
KU 6G Research Center, Department of Computer and Information Engineering, Khalifa University, Abu Dhabi, UAE
Sid Ahmed Fezza
National Higher School of Telecommunications and ICT, Oran, Algeria
Olivier Déforges
Univ. Rennes, INSA Rennes, CNRS, IETR, UMR 6164, Rennes, France
Kassem Kallas
Senior Scientist, Prof., HDR, PhD, SMIEEE, EMBA, at Inserm, in collaboration with IMT Atlantique
Adversarial Machine Learning, AI Security, Cyber Security, Watermarking