🤖 AI Summary
Existing privacy-preserving approaches for AI inference queries require manual intervention in designing privacy mechanisms, making it difficult to simultaneously achieve high detection accuracy and low execution overhead while mitigating sensitive attribute leakage.
Method: This paper proposes an end-to-end automated privacy protection framework based on a novel declarative privacy paradigm. It decouples *what to protect*—specified by users via SQL-like privacy policies—from *how to protect*—automated by the system through sensitive subquery detection, sensitivity propagation tracking, coordinated scheduling of differential privacy and secure multi-party computation, and query rewriting.
Contribution/Results: The framework enables fine-grained sensitivity analysis at the inference query level; achieves a 98.3% detection rate for sensitive attributes; reduces average query latency by 41%; and decreases privacy budget consumption by 37%.
📝 Abstract
Detecting inference queries that run over personal attributes, and protecting such queries from leaking individual information, require tremendous effort from practitioners. To tackle this problem, we propose an end-to-end workflow for automating privacy-preserving inference queries, including the detection of subqueries that involve AI/ML model inferences on sensitive attributes. Our novel declarative privacy-preserving workflow allows users to specify "what private information to protect" rather than "how to protect it". Under the hood, the system automatically chooses privacy-preserving plans and hyperparameters.
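To make the declarative idea concrete, the following is a minimal sketch of what such a workflow could look like. The policy syntax, attribute names, and detection logic here are our own illustrative assumptions, not the paper's actual interface: a user declares *what* to protect, and a naive analyzer flags query fragments that touch protected attributes (a stand-in for the paper's sensitive-subquery detection and sensitivity propagation).

```python
import re

# Hypothetical SQL-like privacy policy. The exact syntax is an assumption
# for illustration; the paper only states that policies are SQL-like and
# declare WHAT to protect, leaving HOW to the system.
POLICY = """
PROTECT age, diagnosis IN patients
WITH EPSILON 1.0
"""

def parse_policy(policy: str) -> dict:
    """Extract the protected attributes and target table from the policy."""
    m = re.search(r"PROTECT\s+(.+?)\s+IN\s+(\w+)", policy)
    attrs = [a.strip() for a in m.group(1).split(",")]
    return {"table": m.group(2), "attributes": attrs}

def find_sensitive_attributes(query: str, policy: dict) -> list:
    """Flag protected attributes referenced by the query -- a toy stand-in
    for the framework's automated sensitive-subquery detection."""
    q = query.lower()
    return [a for a in policy["attributes"] if a.lower() in q]

policy = parse_policy(POLICY)
query = "SELECT predict_risk(age, bmi) FROM patients"
flagged = find_sensitive_attributes(query, policy)
print(flagged)  # the protected attribute feeding the model inference
```

In the full system, a flagged subquery would then be routed through the coordinated privacy mechanisms (differential privacy or secure multi-party computation) chosen automatically by the planner; this sketch only covers the detection step.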