🤖 AI Summary
Existing differential privacy (DP) analyses for optimization rely heavily on global convexity, which makes them inadequate for multi-convex problems under the hidden state assumption, a setting common in practical non-convex learning tasks.
Method: We propose the first privacy loss analysis framework tailored to multi-convex structures under the hidden state assumption, deriving tighter privacy loss bounds. The framework integrates proximal gradient updates and adaptive noise calibration, and is compatible with Mini-Batch Block Coordinate Descent.
Contribution/Results: Our approach significantly improves the privacy–utility trade-off in canonical non-convex applications, including matrix factorization and neural network training, while providing rigorous theoretical guarantees for a broader class of practical models. Compared to prior work, our privacy loss bounds are strictly tighter and the framework is more general, establishing a new paradigm for provably private optimization in non-convex settings.
📝 Abstract
We investigate the differential privacy (DP) guarantees under the hidden state assumption (HSA) for multi-convex problems. Recent analyses of privacy loss under the hidden state assumption have relied on strong assumptions such as convexity, thereby limiting their applicability to practical problems. In this paper, we introduce the Differentially Private Mini-Batch Block Coordinate Descent (DP-MBCD) algorithm, together with privacy loss accounting methods under the hidden state assumption. Our proposed methods apply to a broad range of classical non-convex problems which are, or can be converted to, multi-convex problems, such as matrix factorization and neural network training. In addition to a tighter bound on privacy loss, our theoretical analysis is also compatible with proximal gradient descent and adaptive calibrated noise scenarios.
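To make the setting concrete, the sketch below shows how a differentially private mini-batch block coordinate descent loop might look for matrix factorization, the multi-convex example mentioned above: with one factor fixed, the subproblem in the other factor is convex, so the algorithm alternates noisy clipped gradient steps on each block. This is an illustrative reconstruction, not the paper's algorithm; all hyperparameters (clipping norm, noise multiplier, step size) are placeholder assumptions.

```python
import numpy as np

def dp_mbcd_mf(X, rank=2, steps=300, batch=16, lr=0.1,
               clip=1.0, sigma=0.3, seed=0):
    """Illustrative DP mini-batch block coordinate descent for matrix
    factorization X ~ U @ V.T. Each block update clips per-row gradients
    and adds Gaussian noise (Gaussian mechanism); hyperparameters are
    assumptions, not values from the paper."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(steps):
        # --- U-block: with V fixed, the loss is convex in U ---
        idx = rng.choice(m, size=min(batch, m), replace=False)
        G = (U[idx] @ V.T - X[idx]) @ V / n          # per-row gradients
        G /= np.maximum(1.0, np.linalg.norm(G, axis=1, keepdims=True) / clip)
        G += sigma * clip / len(idx) * rng.standard_normal(G.shape)
        U[idx] -= lr * G
        # --- V-block: with U fixed, the loss is convex in V ---
        jdx = rng.choice(n, size=min(batch, n), replace=False)
        H = (U @ V[jdx].T - X[:, jdx]).T @ U / m     # per-column gradients
        H /= np.maximum(1.0, np.linalg.norm(H, axis=1, keepdims=True) / clip)
        H += sigma * clip / len(jdx) * rng.standard_normal(H.shape)
        V[jdx] -= lr * H
    return U, V
```

Under the hidden state assumption, only the final `(U, V)` pair is released, which is what allows the privacy loss of the intermediate noisy iterates to be accounted for more tightly than by naive composition.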