Michscan: Black-Box Neural Network Integrity Checking at Runtime Through Power Analysis

📅 2025-01-17
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Ensuring runtime integrity of black-box TinyML models deployed on resource-constrained edge devices—without access to model parameters or developer cooperation—remains an open challenge. Method: This paper proposes a lightweight, side-channel–based integrity verification framework that combines correlation power analysis (CPA) with the non-parametric Mann–Whitney U test. Operating entirely in black-box mode, it requires power traces from only five inferences to detect model tampering in real time. Contribution/Results: Evaluated on an STM32F303RC (ARM Cortex-M4) platform, the method achieves a 100% detection rate against three representative integrity attacks (weight poisoning, architecture substitution, and backdoor injection), with a false positive rate below 10⁻⁵. It incurs negligible inference overhead and a minimal memory footprint (<2 KB RAM). To our knowledge, this is the first practical, side-channel–driven integrity assurance solution tailored for TinyML deployments under stringent resource constraints.

📝 Abstract
As neural networks are increasingly used for critical decision-making tasks, the threat of integrity attacks, where an adversary maliciously alters a model, has become a significant security and safety concern. These concerns are compounded by the use of licensed models, where end-users purchase third-party models with only black-box access to protect model intellectual property (IP). In such scenarios, conventional approaches to verify model integrity require knowledge of model parameters or cooperative model owners. To address this challenge, we propose Michscan, a methodology leveraging power analysis to verify the integrity of black-box TinyML neural networks designed for resource-constrained devices. Michscan is based on the observation that modifications to model parameters impact the instantaneous power consumption of the device. We leverage this observation to develop a runtime model integrity-checking methodology that employs correlational power analysis using a golden template or signature to mathematically quantify the likelihood of model integrity violations at runtime through the Mann-Whitney U-Test. Michscan operates in a black-box environment and does not require a cooperative or trustworthy model owner. We evaluated Michscan using an STM32F303RC microcontroller with an ARM Cortex-M4 running four TinyML models in the presence of three model integrity violations. Michscan successfully detected all integrity violations at runtime using power data from five inferences. All detected violations had a negligible probability P<10^(-5) of being produced from an unmodified model (i.e., false positive).
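The statistical core of the abstract's check can be sketched as follows: correlation scores from runtime power traces are compared against scores recorded from the unmodified (golden) model, and the Mann–Whitney U test quantifies how unlikely the runtime scores are under the golden distribution. This is a minimal illustrative sketch, not the paper's implementation; the function name, score values, and threshold are hypothetical (the full Michscan pipeline also performs the correlation power analysis that produces these scores).

```python
# Hypothetical sketch of a Michscan-style integrity decision.
# Inputs are per-inference CPA correlation scores; the real system
# derives these from measured power traces and a golden template.
import numpy as np
from scipy.stats import mannwhitneyu

def integrity_check(golden_scores, runtime_scores, alpha=1e-5):
    """Return (p_value, violation_flag).

    A small p-value means the runtime scores are unlikely to have been
    produced by the unmodified model, i.e., a probable integrity violation.
    """
    _, p_value = mannwhitneyu(
        golden_scores, runtime_scores,
        alternative="two-sided", method="exact",
    )
    return p_value, p_value < alpha

# Toy example with synthetic scores (values are illustrative only):
rng = np.random.default_rng(0)
golden = rng.normal(0.9, 0.02, size=50)    # profiling runs of the golden model
tampered = rng.normal(0.6, 0.05, size=5)   # five runtime inferences, as in the paper
p, violated = integrity_check(golden, tampered)
```

With well-separated score distributions, even five runtime inferences yield a p-value far below the 10⁻⁵ false-positive bound reported in the abstract.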
Problem

Research questions and friction points this paper is trying to address.

Neural Network Integrity
Unauthorized Modification Detection
TinyML Security
Innovation

Methods, ideas, or system contributions that make the work stand out.

Michscan
Neural Network Integrity
Energy Usage Analysis
Robi Paul
Rochester Institute of Technology, Rochester, NY USA
Michael Zuzak
Assistant Professor of Computer Engineering, Rochester Institute of Technology