Unlearning for One-Step Generative Models via Unbalanced Optimal Transport

📅 2026-03-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of machine unlearning in one-step generative models, where existing methods relying on multi-step denoising architectures are inapplicable. To this end, we propose UOT-Unlearn, the first framework to incorporate unbalanced optimal transport (UOT) into unlearning for one-step generators. By balancing an unlearning cost against an f-divergence regularization term, our approach smoothly redistributes the probability mass of the target class across the remaining classes. This strategy effectively avoids generating low-quality or noisy samples. Extensive experiments on CIFAR-10 and ImageNet-256 demonstrate that UOT-Unlearn significantly outperforms current baselines, achieving state-of-the-art performance in both unlearning success rate (measured by PUL) and preservation of generation quality (measured by u-FID).
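For context, unbalanced optimal transport relaxes classical OT's hard marginal constraints into soft divergence penalties. The standard UOT objective (shown here as general background, not the paper's exact loss) has the form:

```latex
\min_{\pi \ge 0} \;\; \int c(x, y)\, \mathrm{d}\pi(x, y)
\;+\; \lambda_1\, D_f(\pi_1 \,\|\, \mu)
\;+\; \lambda_2\, D_f(\pi_2 \,\|\, \nu)
```

where $\pi_1, \pi_2$ are the marginals of the transport plan $\pi$, $\mu$ and $\nu$ are the source and target distributions, $c$ is a transport cost, and $D_f$ is an $f$-divergence. UOT-Unlearn's trade-off between a forget cost and an $f$-divergence penalty with relaxed marginals follows this general template.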

📝 Abstract
Recent advances in one-step generative frameworks, such as flow map models, have significantly improved the efficiency of image generation by learning direct noise-to-data mappings in a single forward pass. However, machine unlearning for ensuring the safety of these powerful generators remains entirely unexplored. Existing diffusion unlearning methods are inherently incompatible with one-step models, as they rely on a multi-step iterative denoising process. In this work, we propose UOT-Unlearn, a novel plug-and-play class unlearning framework for one-step generative models based on Unbalanced Optimal Transport (UOT). Our method formulates unlearning as a principled trade-off between a forget cost, which suppresses the target class, and an $f$-divergence penalty, which preserves overall generation fidelity via relaxed marginal constraints. By leveraging UOT, our method enables the probability mass of the forgotten class to be smoothly redistributed to the remaining classes, rather than collapsing into low-quality or noise-like samples. Experimental results on CIFAR-10 and ImageNet-256 demonstrate that our framework achieves superior unlearning success (PUL) and retention quality (u-FID), significantly outperforming baselines.
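The mass-redistribution idea at the heart of the abstract can be illustrated with a toy sketch (this is an assumption-laden illustration over a discrete class prior, not the paper's actual training objective): removing the forgotten class's probability mass and renormalizing over the retained classes is the KL-minimizing reassignment, which is the behavior the UOT relaxation aims for instead of collapsing that mass onto noise-like outputs.

```python
def redistribute_mass(p, forget_class):
    """Toy illustration (not the UOT-Unlearn algorithm itself):
    zero out the forgotten class's probability mass and renormalize
    over the remaining classes. Proportional renormalization is the
    KL-minimizing way to reassign the freed mass to retained classes.
    """
    retained_total = sum(v for i, v in enumerate(p) if i != forget_class)
    return [0.0 if i == forget_class else v / retained_total
            for i, v in enumerate(p)]

# Uniform class prior over 10 classes (e.g. CIFAR-10), forgetting class 3:
p = [0.1] * 10
q = redistribute_mass(p, forget_class=3)
# q assigns zero mass to class 3 and 1/9 to each remaining class.
```

The freed mass spreads evenly here only because the prior was uniform; for a non-uniform prior, each retained class receives mass in proportion to its original weight.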
Problem

Research questions and friction points this paper is trying to address.

machine unlearning
one-step generative models
image generation safety
class unlearning
generative model security
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unbalanced Optimal Transport
Machine Unlearning
One-Step Generative Models
Flow Map Models
f-divergence