🤖 AI Summary
This work investigates how infinitesimal perturbations in the latent space of flow matching models propagate through the learned flow and shape the dependency structure among generated features. Using Jacobian-vector product (JVP) analysis, the authors derive closed-form expressions for the optimal drift field and its Jacobian under Gaussian and Gaussian-mixture assumptions, and construct an attribute-level JVP estimator by composing the flow with attribute classifiers. Theoretically, they show that even globally nonlinear flows exhibit locally affine structure, which motivates a conditional generation strategy guided by the norm of the classifier Jacobian: conditioning on small norms weakens feature correlations in a way consistent with an underlying shared latent cause. Experimentally, numerical JVPs accurately recover analytical Jacobians on low-dimensional synthetic data, and the proposed method reproduces empirical feature correlations while markedly reducing spurious dependencies in samples generated on MNIST and CelebA.
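A minimal sketch of the attribute-level JVP estimator in JAX; `velocity_field(x, t)` and `classifier(x)` are hypothetical placeholders for the learned drift and the attribute head, and a plain Euler integrator stands in for whatever ODE solver the authors actually use:

```python
import jax
import jax.numpy as jnp

def flow(z, velocity_field, n_steps=100):
    """Transport latent z to a sample by Euler-integrating the learned drift."""
    x, dt = z, 1.0 / n_steps
    for i in range(n_steps):
        x = x + dt * velocity_field(x, i * dt)
    return x

def attribute_jvp(z, v, velocity_field, classifier):
    """JVP of (classifier ∘ flow) at z along a latent perturbation v.

    Forward-mode differentiation through the solver carries the
    perturbation all the way into attribute space in a single pass.
    """
    f = lambda z_: classifier(flow(z_, velocity_field))
    attrs, tangent = jax.jvp(f, (z,), (v,))
    return attrs, tangent

def classifier_jacobian_norm(z, velocity_field, classifier):
    """Frobenius norm of the attribute-by-latent Jacobian, the quantity
    the norm-based conditioning thresholds."""
    f = lambda z_: classifier(flow(z_, velocity_field))
    return jnp.linalg.norm(jax.jacfwd(f)(z))
```

One natural way to apply the conditioning (not necessarily the paper's exact procedure) is rejection sampling: keep only latents whose `classifier_jacobian_norm` falls below a threshold before decoding, a filtering step rather than a formal intervention.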
📝 Abstract
Flow matching learns a velocity field that transports a base distribution to the data distribution. We study how small latent perturbations propagate through these flows and show that Jacobian-vector products (JVPs) provide a practical lens on the dependency structure of generated features. We derive closed-form expressions for the optimal drift and its Jacobian in Gaussian and mixture-of-Gaussians settings, revealing that even globally nonlinear flows admit local affine structure. On low-dimensional synthetic benchmarks, numerical JVPs recover the analytical Jacobians. In image domains, composing the flow with an attribute classifier yields an attribute-level JVP estimator that recovers empirical correlations on MNIST and CelebA. Conditioning on small classifier-Jacobian norms reduces correlations in a way consistent with a hypothesized common-cause structure, though we emphasize that this conditioning is not a formal *do*-intervention.
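To make the local-affine claim concrete, here is the standard closed form for the Gaussian case under the linear interpolation path $x_t = (1-t)\,x_0 + t\,x_1$ with $x_0 \sim \mathcal{N}(0, I)$ and $x_1 \sim \mathcal{N}(\mu, \Sigma)$ independent; this follows from routine Gaussian conditioning, and the paper's exact parameterization may differ:

$$
x_t \sim \mathcal{N}(t\mu,\, \Sigma_t), \qquad \Sigma_t = (1-t)^2 I + t^2 \Sigma,
$$

$$
v^\star(x, t) = \mathbb{E}[x_1 - x_0 \mid x_t = x]
= \mu + \bigl(t\Sigma - (1-t)I\bigr)\,\Sigma_t^{-1}\,(x - t\mu),
$$

$$
\nabla_x v^\star(x, t) = \bigl(t\Sigma - (1-t)I\bigr)\,\Sigma_t^{-1}.
$$

The drift is affine in $x$ and its Jacobian is constant, so a JVP along a latent perturbation is exact rather than a local approximation. For a Gaussian mixture, $v^\star$ becomes a posterior-weighted combination of such affine maps: globally nonlinear, but locally affine wherever one component dominates.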