Rényi Differential Privacy for Heavy-Tailed SDEs via Fractional Poincaré Inequalities

📅 2025-11-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of differential privacy (DP) analysis for stochastic gradient descent (SGD) under heavy-tailed noise—a setting where existing DP frameworks, relying on light-tailed assumptions or suffering from strong dimension dependence, fail to extend to Rényi differential privacy (RDP). We establish the first RDP guarantees for heavy-tailed stochastic differential equations (SDEs) and their discrete SGD approximations. Our method introduces fractional Poincaré inequalities, integrates Rényi divergence flow analysis with continuous–discrete joint modeling, and yields a unified privacy tracking framework. The resulting RDP bounds exhibit significantly weakened dependence on parameter dimension and eliminate the need for gradient clipping—enabling rigorous privacy control even under heavy-tailed noise. This advances DP theory by simultaneously overcoming limitations imposed by tail behavior assumptions and the scope of privacy definitions, thereby broadening the applicability of differential privacy to modern deep learning settings.
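For reference, the two standard definitions behind the summary above are the Rényi divergence of order α and (α, ε)-RDP. These are the textbook forms (following Mironov's formulation of RDP) in generic notation, not necessarily the exact statements used in this paper:

```latex
% Renyi divergence of order \alpha > 1 between distributions P and Q:
D_\alpha(P \,\|\, Q)
  \;=\; \frac{1}{\alpha - 1}
  \log \, \mathbb{E}_{x \sim Q}\!\left[\left(\frac{dP}{dQ}(x)\right)^{\alpha}\right].

% A randomized mechanism M satisfies (\alpha, \varepsilon)-RDP if, for all
% adjacent datasets D, D':
D_\alpha\big(M(D) \,\|\, M(D')\big) \;\le\; \varepsilon .
```

The paper's contribution is to bound this divergence along the trajectory of a heavy-tailed SDE (and its discretization), where the usual light-tailed moment arguments are unavailable.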

📝 Abstract
Characterizing the differential privacy (DP) of learning algorithms has become a major challenge in recent years. In parallel, many studies suggested investigating the behavior of stochastic gradient descent (SGD) with heavy-tailed noise, both as a model for modern deep learning models and to improve their performance. However, most DP bounds focus on light-tailed noise, where satisfactory guarantees have been obtained but the proposed techniques do not directly extend to the heavy-tailed setting. Recently, the first DP guarantees for heavy-tailed SGD were obtained. These results provide $(0,\delta)$-DP guarantees without requiring gradient clipping. Despite casting new light on the link between DP and heavy-tailed algorithms, these results have a strong dependence on the number of parameters and cannot be extended to other DP notions like the well-established Rényi differential privacy (RDP). In this work, we propose to address these limitations by deriving the first RDP guarantees for heavy-tailed SDEs, as well as their discretized counterparts. Our framework is based on new Rényi flow computations and the use of well-established fractional Poincaré inequalities. Under the assumption that such inequalities are satisfied, we obtain DP guarantees that have a much weaker dependence on the dimension compared to prior art.
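The heavy-tailed SGD dynamics discussed in the abstract can be sketched with a minimal toy simulation. This is an illustrative sketch, not the paper's algorithm: the update rule θ ← θ − η∇f(θ) + η^{1/α}·σ·ξ with symmetric α-stable ξ is one standard Euler-type discretization of an α-stable SDE, and all function names, step sizes, and constants below are hypothetical choices. The α-stable draws use the classical Chambers–Mallows–Stuck method, with no gradient clipping anywhere, matching the clipping-free setting the abstract emphasizes.

```python
import math
import random


def alpha_stable(alpha: float, rng: random.Random) -> float:
    """One symmetric alpha-stable draw via the Chambers-Mallows-Stuck method."""
    v = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)
    if abs(alpha - 1.0) < 1e-12:
        return math.tan(v)  # alpha = 1 reduces to the Cauchy distribution
    return (math.sin(alpha * v) / math.cos(v) ** (1.0 / alpha)) * (
        math.cos(v * (1.0 - alpha)) / w
    ) ** ((1.0 - alpha) / alpha)


def heavy_tailed_sgd(grad, theta0, eta=0.05, sigma=0.1, alpha=1.7,
                     steps=200, seed=0):
    """SGD-like iteration driven by alpha-stable noise, with no clipping.

    Update: theta <- theta - eta * grad(theta) + eta**(1/alpha) * sigma * xi,
    using the eta**(1/alpha) scaling natural for alpha-stable increments.
    """
    rng = random.Random(seed)
    theta = theta0
    for _ in range(steps):
        noise = eta ** (1.0 / alpha) * sigma * alpha_stable(alpha, rng)
        theta = theta - eta * grad(theta) + noise
    return theta


# Toy quadratic objective f(x) = x^2 / 2, so grad(x) = x; the iterates
# contract toward the minimum at 0 while occasionally taking large jumps.
final = heavy_tailed_sgd(grad=lambda x: x, theta0=5.0)
```

For α = 2 the sampler reduces to Gaussian noise and the iteration becomes ordinary noisy SGD; lowering α fattens the jump distribution, which is exactly the regime where the paper's fractional Poincaré machinery replaces light-tailed arguments.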
Problem

Research questions and friction points this paper is trying to address.

Extending Rényi differential privacy guarantees to heavy-tailed stochastic differential equations
Overcoming dimension-dependent limitations in existing heavy-tailed DP analysis
Establishing privacy bounds via fractional Poincaré inequalities without gradient clipping
Innovation

Methods, ideas, or system contributions that make the work stand out.

RDP guarantees for heavy-tailed SDEs
A framework based on fractional Poincaré inequalities
Weaker dimension dependence than prior methods
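The fractional Poincaré inequality named above can be stated, in one common nonlocal form, as a variance bound by a fractional Dirichlet form. The notation below is generic (the constant and the precise form assumed in the paper may differ):

```latex
% A generic fractional Poincare inequality for a probability measure \mu
% on \mathbb{R}^d, with nonlocal Dirichlet form of order \alpha:
\mathrm{Var}_\mu(f) \;\le\; C_\alpha \, \mathcal{E}^{(\alpha)}(f, f),
\qquad
\mathcal{E}^{(\alpha)}(f, f) \;=\;
\iint_{\mathbb{R}^d \times \mathbb{R}^d}
\frac{\big(f(x) - f(y)\big)^2}{|x - y|^{d + \alpha}} \, dy \, d\mu(x).
```

Inequalities of this type control the decay of Rényi divergence along the heavy-tailed flow, playing the role that ordinary (log-Sobolev or Poincaré) inequalities play in light-tailed DP analyses.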
Benjamin Dupuis
PhD student, INRIA - ENS Paris
Machine learning, probability theory
Mert Gurbuzbalaban
Department of Management Science and Information Systems, Rutgers Business School, Piscataway, NJ 08854, United States of America
Umut Şimşekli
Inria, CNRS, Ecole Normale Supérieure PSL Research University, Paris, France
Jian Wang
School of Mathematics and Statistics, Key Laboratory of Analytical Mathematics and Applications (Ministry of Education), Fujian Provincial Key Laboratory of Statistics and Artificial Intelligence, Fujian Normal University, 350007 Fuzhou, People’s Republic of China
S. Yıldırım
Faculty of Engineering and Natural Sciences, Sabancı University, Istanbul, Turkey
Lingjiong Zhu
Florida State University