🤖 AI Summary
Existing differentially private hyperparameter tuning methods under the white-box setting suffer from loose theoretical privacy bounds that significantly diverge from empirical observations. Method: This paper first uncovers intrinsic structural properties of the hyperparameter tuning process, breaking the tightness limitations of conventional private selection mechanisms. We propose a novel privacy analysis framework grounded in sensitivity reconstruction and white-box modeling, integrating rigorous differential privacy theory with empirical privacy auditing—without requiring additional assumptions—to precisely characterize actual privacy loss. Contribution/Results: Experiments demonstrate that our method substantially reduces privacy budget consumption (by 35–62% on average) while preserving model utility. It is broadly applicable across diverse hyperparameter spaces and model families, offering both tighter theoretical guarantees and practical tools for private machine learning.
📝 Abstract
We study the application of differential privacy in hyper-parameter tuning, a crucial process in machine learning involving selecting the best hyper-parameter from several candidates. Unlike many private learning algorithms, including the prevalent DP-SGD, the privacy implications of tuning remain insufficiently understood or often totally ignored. Recent works propose a generic private selection solution for the tuning process, yet a fundamental question persists: is this privacy bound tight? This paper provides an in-depth examination of this question. Initially, we provide studies affirming the current privacy analysis for private selection is indeed tight in general. However, when we specifically study the hyper-parameter tuning problem in a white-box setting, such tightness no longer holds. This is first demonstrated by applying privacy audit on the tuning process. Our findings underscore a substantial gap between current theoretical privacy bound and the empirical bound derived even under strong audit setups. This gap motivates our subsequent investigations. Our further study provides improved privacy results for private hyper-parameter tuning due to its distinct properties. Our results demonstrate broader applicability compared to prior analyses, which are limited to specific parameter configurations.