🤖 AI Summary
To mitigate statistical disclosure risks when releasing machine learning models from trusted research environments, this paper proposes a two-stage (ante-hoc and post-hoc) Statistical Disclosure Control (SDC) framework and open-sources SACRO-ML, an MIT-licensed toolkit. The framework addresses the risk that model parameters may inadvertently reveal statistics about sensitive training data. Its core innovations are: (1) SafeModel, an ante-hoc module that assesses disclosure risk at the model level prior to release; and (2) Attacks, a post-hoc module that integrates membership inference, model inversion, and attribute inference attacks to empirically quantify residual risk after training. SACRO-ML supports mainstream ML models through lightweight encapsulation, improving privacy compliance and risk control when deploying models trained on sensitive data. It constitutes the first open-source, system-level implementation of SDC tailored for secure ML model release.
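To make the ante-hoc idea concrete, here is a minimal sketch of the kind of check SafeModel performs: comparing a model's training hyperparameters against "safe" ranges before the model is considered for release. The rule names, thresholds, and function names below are illustrative assumptions, not SACRO-ML's actual API.

```python
# Illustrative ante-hoc SDC check in the spirit of SafeModel.
# SAFE_RULES maps a hyperparameter name to (predicate, advisory message);
# the specific thresholds here are invented for illustration only.
SAFE_RULES = {
    "min_samples_leaf": (lambda v: v >= 5, "should be >= 5"),
    "max_depth": (lambda v: v is not None and v <= 8, "should be <= 8"),
}

def preliminary_check(params: dict) -> tuple[bool, list[str]]:
    """Return (ok, messages) flagging hyperparameters outside safe ranges."""
    messages = []
    for name, (predicate, msg) in SAFE_RULES.items():
        if name in params and not predicate(params[name]):
            messages.append(f"unsafe {name}={params[name]!r}: {msg}")
    return (not messages, messages)

# A deliberately risky training regime fails the check before release.
ok, msgs = preliminary_check({"min_samples_leaf": 1, "max_depth": 20})
print(ok)    # False: both rules are violated
print(msgs)
```

The point of such a check is that it is cheap and runs before any attack simulation: a researcher gets immediate feedback that the training regime itself (e.g. unbounded tree depth, tiny leaves) is likely to memorise individual records.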
📄 Abstract
We present SACRO-ML, an integrated suite of open-source Python tools to facilitate the statistical disclosure control (SDC) of machine learning (ML) models trained on confidential data prior to public release. SACRO-ML combines (i) a SafeModel package that extends commonly used ML models to provide ante-hoc SDC by assessing the vulnerability to disclosure posed by the training regime; and (ii) an Attacks package that provides post-hoc SDC by rigorously assessing the empirical disclosure risk of a model through a variety of simulated attacks after training. The SACRO-ML code and documentation are available under an MIT license at https://github.com/AI-SDC/SACRO-ML.
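The post-hoc stage can be illustrated with a toy loss-threshold membership inference attack against a deliberately overfitted "memorising" classifier. This mimics the kind of empirical risk assessment the Attacks package automates; the model, data, and threshold below are toy stand-ins of my own, not SACRO-ML code.

```python
# Illustrative post-hoc SDC check: loss-threshold membership inference.
# An attacker who sees per-example losses can guess which records were in
# the training set when the model is far more confident on members.
import math

train = [((0.1, 0.2), 0), ((0.9, 0.8), 1), ((0.4, 0.6), 1)]  # members
test = [((0.2, 0.1), 0), ((0.8, 0.9), 1), ((0.5, 0.5), 0)]   # non-members

lookup = {x: y for x, y in train}

def predict_proba(x):
    """Memorising model: near-certain on training points, uninformative elsewhere."""
    if x in lookup:
        return 0.99 if lookup[x] == 1 else 0.01
    return 0.5

def loss(x, y):
    """Cross-entropy loss of the model's prediction for the true label y."""
    p = predict_proba(x)
    p = p if y == 1 else 1 - p
    return -math.log(p)

# Attack rule: claim "member" when the per-example loss is below a threshold.
THRESHOLD = 0.1
def infer_member(x, y):
    return loss(x, y) < THRESHOLD

hits = sum(infer_member(x, y) for x, y in train)   # members correctly flagged
false_alarms = sum(infer_member(x, y) for x, y in test)
accuracy = (hits + (len(test) - false_alarms)) / (len(train) + len(test))
print(accuracy)  # 1.0 here: a perfect attack signals unacceptable disclosure risk
```

An attack accuracy near 0.5 would mean the attacker does no better than coin-flipping, while values near 1.0 (as for this memorising model) indicate the trained model leaks membership of individual records and should not be released without mitigation.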