Uncovering Memorization in Time Series Imputation Models: LBRM Membership Inference and Its Link to Attribute Leakage

📅 2026-03-25
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Time series imputation models are widely deployed in healthcare, finance, and other domains, yet their black-box nature poses significant privacy risks, including membership inference and attribute leakage. This work proposes the first two-stage black-box attack framework tailored to such models: it begins with a high-precision membership inference method designed for overfitting-resistant architectures, leveraging reference models to enhance detection performance; it then introduces the first attribute inference attack in the context of time series imputation, uncovering the model’s memorization of sensitive features. Experimental results demonstrate that the proposed approach substantially outperforms baseline methods on the tpr@top25% metric and achieves 90% accuracy in predicting the success of attribute inference—compared to 78% for baselines—thereby effectively establishing a mechanistic link between membership inference and attribute leakage.

📝 Abstract
Deep learning models for time series imputation are now essential in fields such as healthcare, the Internet of Things (IoT), and finance. However, their deployment raises critical privacy concerns. Beyond the well-known issue of unintended memorization, which has been extensively studied in generative models, we demonstrate that time series models are vulnerable to inference attacks in a black-box setting. In this work, we introduce a two-stage attack framework comprising: (1) a novel membership inference attack based on a reference model that improves detection accuracy, even for models robust to overfitting-based attacks, and (2) the first attribute inference attack that predicts sensitive characteristics of the training data of time series imputation models. We evaluate these attacks on attention-based and autoencoder architectures in two scenarios: models trained from scratch, and fine-tuned models where the adversary has access to the initial weights. Our experimental results demonstrate that the proposed membership attack retrieves a significant portion of the training data, with a tpr@top25% score significantly higher than a naive attack baseline. We also show that our membership attack provides good insight into whether attribute inference will succeed (with a precision of 90% versus 78% in the general case).
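The abstract's core ideas, calibrating a membership score against a reference model and evaluating with a tpr@top25%-style metric, can be illustrated with a minimal sketch. This is not the paper's actual method; the function names and the loss-difference scoring rule are illustrative assumptions, standing in for whatever score the LBRM attack computes.

```python
import numpy as np

def membership_scores(target_losses, reference_losses):
    """Reference-calibrated membership scores (illustrative).

    A sample whose imputation loss under the target model is much lower
    than under a reference model trained on disjoint data is more likely
    to be a training member, so a higher score = more likely member.
    """
    return np.asarray(reference_losses) - np.asarray(target_losses)

def tpr_at_top_k(scores, is_member, k_frac=0.25):
    """True-positive rate among the top k% highest-scoring samples,
    a tpr@top25%-style metric as used in the paper's evaluation."""
    scores = np.asarray(scores)
    is_member = np.asarray(is_member, dtype=bool)
    k = max(1, int(len(scores) * k_frac))
    top = np.argsort(scores)[::-1][:k]  # indices of the k highest scores
    return is_member[top].sum() / is_member.sum()
```

Calibrating against a reference model is what lets the attack work even on overfitting-resistant models: it measures how much *easier* a sample is for the target than for a comparable model that never saw it, rather than relying on raw loss alone.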
Problem

Research questions and friction points this paper is trying to address.

time series imputation
privacy
membership inference
attribute leakage
deep learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

membership inference
attribute leakage
time series imputation
privacy attack
reference-based attack
Faiz Taleb
EDF SAMOVAR, Télécom SudParis, Institut Polytechnique de Paris
Ivan Gazeau
EDF
Maryline Laurent
Telecom SudParis
cybersecurity, privacy enhancing technologies, digital identity, blockchain