Paper 'A Tight Context-aware Privacy Bound for Histogram Publication' accepted for publication in IEEE Signal Processing Letters.
Paper 'Privacy Mechanism Design based on Empirical Distributions' accepted to CSF 2026.
Paper 'Bounds on the privacy amplification of arbitrary channels via the contraction of fα-divergence' accepted to Allerton 2025.
Co-supervised PhD student Leonhard Grosse successfully defended his licentiate thesis.
Part of the program committee for APVP 2025, the 15th Atelier sur la Protection de la Vie Privée.
Paper 'Rethinking Disclosure Prevention with Pointwise Maximal Leakage' accepted to the Journal of Privacy and Confidentiality.
Invited speaker at Digitalize in Stockholm 2024: participated in the panel 'To Legislate or Not to Legislate the AI Realm', sharing thoughts on the newly adopted AI Act and its impact on research, innovation, and society.
Paper 'Extremal Mechanisms for Pointwise Maximal Leakage' published in IEEE Transactions on Information Forensics and Security.
Presented paper 'Evaluating Differential Privacy on Correlated Datasets Using Pointwise Maximal Leakage' at the 2024 Annual Privacy Forum.
Received a grant from the Swedish Research Council (VR) to support a postdoctoral position at Inria.
Research Experience
Postdoctoral researcher at Inria Saclay, Comète team, supported by a Swedish Research Council (VR) postdoctoral fellowship.
Education
Received her PhD in February 2024 from KTH Royal Institute of Technology. During her PhD, she introduced a new privacy measure called pointwise maximal leakage (PML). PML belongs to the family of quantitative information flow definitions and is provably more general than differential privacy.
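As a rough illustration of the idea (not taken from this page), PML is commonly characterized in closed form as the Rényi divergence of order infinity between the posterior and the prior: observing an outcome y leaks ℓ(X → y) = log max_x P(y|x) / P(y). The toy prior and channel below are hypothetical, chosen only to show the computation:

```python
import numpy as np

# Toy sketch of pointwise maximal leakage (PML), assuming the standard
# closed form ell(X -> y) = log( max_x P(y|x) / P(y) ).
# The prior and channel values here are made up for illustration.

P_X = np.array([0.5, 0.5])            # prior on the secret X
P_Y_given_X = np.array([[0.9, 0.1],   # row x: conditional P(Y | X = x)
                        [0.4, 0.6]])

P_Y = P_X @ P_Y_given_X               # marginal distribution of Y

def pml(y):
    """PML incurred by observing the outcome y, in nats."""
    return np.log(np.max(P_Y_given_X[:, y]) / P_Y[y])

for y in range(2):
    print(f"ell(X -> y={y}) = {pml(y):.4f} nats")
```

Note that, unlike average-case leakage measures, PML assigns a separate leakage value to each observed outcome y, which is what makes outcome-dependent (context-aware) privacy guarantees possible.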
Background
Research interests: trustworthy machine learning, with a particular focus on privacy and fairness. She develops mathematically rigorous frameworks to analyze the privacy and fairness guarantees of algorithms and to understand their fundamental trade-offs.