🤖 AI Summary
This work addresses a key instability in physics-constrained consistency models: during training, interference from PDE residuals can drive the model toward trivial or degenerate solutions and distort the learned data distribution. To mitigate this, the authors propose a structure-preserving two-stage training strategy: the model first learns the data distribution; the decoder is then frozen and a two-step residual objective fine-tunes for physical consistency. This decouples distribution modeling from physics-aware optimization. By combining consistency models, physics-informed training, and projection-based zero-shot correction, the method achieves high-fidelity results in both unconditional generation and forward problem solving, reducing computational cost by several orders of magnitude relative to diffusion-model baselines while maintaining comparable accuracy. A minimal sketch of the two-stage recipe follows this summary.
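The sketch below illustrates the two-stage strategy in PyTorch under stated assumptions: `ConsistencyModel`, `consistency_loss`, and `pde_residual` are illustrative placeholders for the paper's architecture, consistency objective, and PDE operator, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConsistencyModel(nn.Module):
    """Toy stand-in: an encoder plus a coefficient decoder (names are assumptions)."""
    def __init__(self, dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim + 1, 128), nn.SiLU(), nn.Linear(128, 128))
        self.decoder = nn.Linear(128, dim)  # maps latent features to solution coefficients

    def forward(self, x_t, t):
        # Condition on the noise level t and predict a clean-sample estimate.
        h = self.encoder(torch.cat([x_t, t.expand(x_t.size(0), 1)], dim=-1))
        return self.decoder(h)

def consistency_loss(model, x0, t1, t2):
    """Simplified self-consistency objective between two adjacent noise levels."""
    noise = torch.randn_like(x0)
    out1 = model(x0 + t1 * noise, t1)
    with torch.no_grad():
        out2 = model(x0 + t2 * noise, t2)
    return (out1 - out2).pow(2).mean()

def pde_residual(u):
    """Placeholder residual; a real implementation would apply the
    differential operator of the target PDE (assumption)."""
    return u.diff(dim=-1).pow(2).mean()

model = ConsistencyModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x0 = torch.randn(32, 64)  # toy training batch

# Stage 1: distribution learning only (no physics terms).
for _ in range(100):
    loss = consistency_loss(model, x0, torch.tensor(0.5), torch.tensor(0.4))
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: freeze the decoder so the learned output structure is preserved,
# then fine-tune with a two-step residual objective: take a coarse consistency
# step, re-noise, take a second refining step, and penalize the PDE residual
# of the refined (structurally valid) sample rather than the noisy first guess.
for p in model.decoder.parameters():
    p.requires_grad = False
opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=1e-5)
for _ in range(100):
    t_hi, t_lo = torch.tensor(0.8), torch.tensor(0.3)
    x1 = model(torch.randn(32, 64), t_hi)                # step 1: coarse sample
    x2 = model(x1 + t_lo * torch.randn_like(x1), t_lo)   # step 2: refined sample
    loss = pde_residual(x2)
    opt.zero_grad(); loss.backward(); opt.step()
```

Freezing the decoder is what makes the fine-tuning "structure-preserving" here: the physics gradient can only reshape the encoder's mapping, not the output parameterization learned in Stage 1.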
📝 Abstract
We propose a physics-informed consistency modeling framework for solving partial differential equations (PDEs) via fast, few-step generative inference. We identify a key stability challenge in physics-constrained consistency training: PDE residuals can drive the model toward trivial or degenerate solutions, degrading the learned data distribution. To address this, we introduce a structure-preserving two-stage training strategy that decouples distribution learning from physics enforcement by freezing the coefficient decoder during physics-informed fine-tuning. We further propose a two-step residual objective that enforces physical consistency on refined, structurally valid generative trajectories rather than on noisy single-step predictions. The resulting framework enables stable, high-fidelity inference for both unconditional generation and forward problems. We demonstrate that forward solutions can be obtained via a projection-based zero-shot inpainting procedure, achieving accuracy consistent with diffusion baselines at an orders-of-magnitude reduction in computational cost.
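A hedged sketch of projection-based zero-shot inpainting for the forward problem, under the assumption that known data (e.g., boundary or initial values) occupy a masked region of the solution field; the paper's exact projection and noise schedule may differ, and `solve_forward` is a hypothetical helper. At each few-step iteration, the clean-sample estimate is projected back onto the observations before re-noising, so no retraining is needed.

```python
import torch

def solve_forward(model, obs, mask, sigmas):
    """obs: observed values; mask: 1 where values are known, 0 where unknown;
    sigmas: decreasing noise levels ending at 0. Assumes model(x_t, t) -> clean estimate."""
    x = torch.randn_like(obs) * sigmas[0]
    for t_cur, t_next in zip(sigmas[:-1], sigmas[1:]):
        x = model(x, torch.as_tensor(t_cur))   # consistency step: estimate clean sample
        x = mask * obs + (1 - mask) * x        # projection: enforce known data exactly
        if t_next > 0:                         # re-noise for the next few-step iteration
            x = x + t_next * torch.randn_like(x)
    return x

# Usage with the toy model above: the first 8 entries play the role of known data.
obs = torch.zeros(32, 64)
mask = torch.zeros(32, 64); mask[:, :8] = 1.0
u = solve_forward(model, obs, mask, sigmas=[0.8, 0.3, 0.0])
```

Because the projection is applied at inference time only, the same unconditionally trained model serves both generation and forward solving, which is the source of the claimed cost reduction relative to iterative diffusion sampling.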