🤖 AI Summary
This work addresses the longstanding challenge of simultaneously achieving theoretical tractability and strong empirical performance in generative modeling and unpaired distribution translation. The authors propose Electrostatic Field Matching (EFM), a framework inspired by capacitor physics: the source and target distributions are modeled as parallel capacitor plates carrying opposite charges; a neural network learns the associated electrostatic field, and samples are transported along its field lines via numerical integration. EFM formalizes electrostatic principles into a differentiable, theoretically grounded distribution translation paradigm, with provable guarantees that following the field lines transfers one distribution to the other. Key components include the capacitor-style geometric setup, field-line integration guided by the learned field, and neural electrostatic field modeling. On synthetic benchmarks and cross-domain image translation tasks, EFM delivers both high-fidelity generation and theoretical justification.
📝 Abstract
We propose Electrostatic Field Matching (EFM), a novel method suitable for both generative modeling and distribution transfer tasks. Our approach is inspired by the physics of an electrical capacitor. We place the source and target distributions on the capacitor plates and assign them positive and negative charges, respectively. We then learn the electrostatic field of the capacitor with a neural network approximator. To map one distribution to the other, we start at one plate of the capacitor and move the samples along the learned electrostatic field lines until they reach the other plate. We theoretically justify that this approach provably performs the desired distribution transfer. In practice, we demonstrate the performance of EFM in experiments on toy and image data.
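To make the sampling procedure concrete, here is a minimal sketch of the plate-to-plate transport loop described in the abstract. All names here are ours, not the authors': for illustration we replace the learned neural field with the analytic uniform field of an ideal parallel-plate capacitor, and we use plain Euler integration along field lines; in EFM the field would instead be predicted by the trained network.

```python
import numpy as np

def capacitor_field(x):
    """Stand-in for the learned field: the uniform field of an ideal
    parallel-plate capacitor, pointing from the positive plate (z = 0)
    to the negative plate (z = 1) along the last coordinate axis."""
    field = np.zeros_like(x)
    field[:, -1] = 1.0
    return field

def transport(samples, field_fn, step=0.0625, max_steps=1000):
    """Move samples along field lines by Euler integration until they
    reach the target plate at z = 1. A power-of-two step size is used
    here only so that the toy samples land on the plate exactly."""
    x = samples.copy()
    for _ in range(max_steps):
        active = x[:, -1] < 1.0          # samples still in flight
        if not active.any():
            break
        x[active] += step * field_fn(x[active])
    return x

rng = np.random.default_rng(0)
# 2-D points: first coordinate is "data", last is the plate axis (z = 0).
src = np.stack([rng.normal(size=100), np.zeros(100)], axis=1)
out = transport(src, capacitor_field)
```

Because this toy field has no component along the data axis, the data coordinates are unchanged and only the plate coordinate advances; with a learned field, the trajectories would bend so that the arriving samples follow the target distribution.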