🤖 AI Summary
To address sensitive-information leakage risks in process mining, particularly in sparse-variant and long-tail distribution scenarios where existing differential privacy (DP) methods suffer from low utility and poor scalability, this paper proposes two differentially private trace-variant generation approaches based on trained generative models: TraVaG, which leverages Generative Adversarial Networks (GANs), and a second method built on Denoising Diffusion Probabilistic Models (DDPMs). Both jointly model sequence structure and privacy-preserving noise injection, eliminating the need for trace truncation or the manual injection of fake variants, and thereby significantly improve the privacy–utility trade-off for infrequent variants. Experiments on real-world event logs demonstrate that, under ε-differential privacy guarantees, the approaches achieve a 12.7% improvement in process discovery accuracy over baseline methods and deliver substantially stronger privacy protection for sparse variants, while maintaining industrial-scale scalability and rigorous DP compliance.
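For context, the conventional baseline the paper improves on releases trace-variant *counts* directly under ε-DP via the Laplace mechanism. The sketch below is illustrative and not taken from the paper: the variant strings, the sensitivity argument (each trace contributes to exactly one variant, so L1 sensitivity is 1), and the post-processing (rounding, clipping at zero) are our assumptions. It also shows why sparse variants suffer: a true count of 1 is easily drowned by noise of scale 1/ε.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling of Laplace(0, scale) using only the stdlib.
    u = random.uniform(-0.5, 0.5)
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def release_variant_counts(counts: dict, epsilon: float) -> dict:
    # Each trace belongs to exactly one variant, so removing one trace
    # changes one count by 1 (L1 sensitivity 1) -> noise scale 1/epsilon.
    # Rounding and clipping at zero are DP-safe post-processing steps.
    return {variant: max(0, round(c + laplace_noise(1.0 / epsilon)))
            for variant, c in counts.items()}

# Hypothetical toy log: variant string -> frequency.
log = {"<a,b,c>": 120, "<a,c,b>": 45, "<a,b,b,c>": 1}
private = release_variant_counts(log, epsilon=1.0)
```

Note how the infrequent variant `<a,b,b,c>` (true count 1) may be reported as 0 or inflated severalfold, which is exactly the low-utility regime in which the generative approaches are claimed to help.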
📝 Abstract
In recent years, industry has seen increasingly widespread use of process mining and automated event data analysis. Consequently, addressing privacy concerns arising from the sensitive and private information contained in the event data consumed by process mining algorithms is of growing importance. State-of-the-art research mainly focuses on providing quantifiable privacy guarantees, e.g., via differential privacy, for the trace variants used by the main process mining techniques, e.g., process discovery. However, privacy preservation techniques designed for releasing trace variants still fall short of the demands of industry-scale use. Moreover, ensuring privacy guarantees when infrequent trace variants occur frequently remains challenging. In this paper, we introduce two novel approaches for releasing differentially private trace variants based on trained generative models. With TraVaG, we leverage *Generative Adversarial Networks* (GANs) to sample from a privatized implicit variant distribution. Our second method employs *Denoising Diffusion Probabilistic Models* (DDPMs) that reconstruct artificial trace variants from noise via trained Markov chains. Both methods offer industry-scale benefits and strengthen privacy guarantees, particularly in scenarios with a substantial prevalence of infrequent variants. They also overcome the shortcomings of conventional privacy preservation techniques, such as bounding variant length and introducing fake variants. Experimental results on real-life event data demonstrate that our approaches surpass state-of-the-art techniques in terms of both privacy guarantees and utility preservation.
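The DDPM mentioned above reconstructs variants by learning to reverse a fixed forward Markov chain that gradually corrupts data with Gaussian noise. As a minimal sketch of that forward chain only (the trained reverse denoiser, the variant encoding, and the noise schedule values below are all our assumptions, not the paper's): each step applies x_t = √(1−β_t)·x_{t−1} + √(β_t)·ε with ε ~ N(0, 1), so after enough steps any encoded variant is indistinguishable from pure noise.

```python
import math
import random

def forward_diffuse(x0: list, betas: list) -> list:
    # Forward DDPM Markov chain: at each step t, shrink the signal by
    # sqrt(1 - beta_t) and add Gaussian noise with variance beta_t.
    x = list(x0)
    for beta in betas:
        x = [math.sqrt(1.0 - beta) * xi + math.sqrt(beta) * random.gauss(0.0, 1.0)
             for xi in x]
    return x

# Hypothetical one-hot encoding of a trace variant <a,b,c> over
# the activity alphabet {a, b, c} (three positions x three activities).
x0 = [1.0, 0.0, 0.0,
      0.0, 1.0, 0.0,
      0.0, 0.0, 1.0]
betas = [0.1] * 50  # toy constant schedule; real schedules are tuned
xT = forward_diffuse(x0, betas)  # approaches a standard Gaussian sample
```

Generation runs this chain in reverse: starting from Gaussian noise, a trained network predicts and removes the noise step by step until a synthetic variant encoding remains, which is what lets the method emit artificial trace variants without truncating or faking entries in the original log.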