Hidden State Differential Private Mini-Batch Block Coordinate Descent for Multi-convexity Optimization

📅 2024-07-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing differential privacy (DP) analyses for optimization rely heavily on global convexity, rendering them inadequate for multi-convex problems under hidden-state assumptions—common in practical non-convex learning tasks. Method: We propose the first privacy-loss analysis framework tailored to multi-convex structures under hidden states, deriving tighter privacy loss bounds. The framework integrates proximal gradient updates and adaptive noise calibration, and is compatible with Mini-Batch Block Coordinate Descent. Contribution/Results: Our approach significantly improves the privacy–utility trade-off in canonical non-convex applications—including matrix factorization and neural network training—while providing rigorous theoretical guarantees for a broader class of practical models. Compared to prior work, our privacy loss bounds are strictly tighter and the framework exhibits greater generality, establishing a new paradigm for provably private optimization in non-convex settings.

Technology Category

Application Category

📝 Abstract
We investigate the differential privacy (DP) guarantees under the hidden state assumption (HSA) for multi-convex problems. Recent analyses of privacy loss under the hidden state assumption have relied on strong assumptions such as convexity, thereby limiting their applicability to practical problems. In this paper, we introduce the Differential Privacy Mini-Batch Block Coordinate Descent (DP-MBCD) algorithm, accompanied by the privacy loss accounting methods under the hidden state assumption. Our proposed methods apply to a broad range of classical non-convex problems which are or can be converted to multi-convex problems, such as matrix factorization and neural network training. In addition to a tighter bound for privacy loss, our theoretical analysis is also compatible with proximal gradient descent and adaptive calibrated noise scenarios.
Problem

Research questions and friction points this paper is trying to address.

Ensuring differential privacy in multi-convex optimization under hidden state assumptions
Extending privacy guarantees to non-convex problems via multi-convex conversion
Providing tighter privacy loss bounds for proximal and adaptive noise methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

DP-MBCD algorithm for multi-convex optimization
Privacy loss accounting under hidden state
Compatible with proximal gradient descent
🔎 Similar Papers
No similar papers found.
Ding Chen
Ding Chen
Postdoctoral Scholar, University of Texas Southwestern Medical Center
C
Chen Liu
City University of Hong Kong, Hong Kong, China