Efficient Generative Transformer Operators For Million-Point PDEs

📅 2025-12-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing neural operators face three key bottlenecks in PDE solving: poor scalability on dense grids, error accumulation in long-horizon temporal rollout, and weak generalization due to task coupling. This paper proposes ECHO, a generative Transformer-based operator framework that integrates a hierarchical convolutional encoder-decoder architecture with an adaptive training paradigm—mapping sparse inputs to high-resolution outputs—achieving ~100× spatiotemporal compression. By decoupling representation learning from task-specific fine-tuning, ECHO enables a unified treatment of diverse PDE systems. Furthermore, it incorporates both conditional and unconditional generative modeling to substantially suppress error drift. Evaluated on PDEs with complex geometries, high-frequency dynamics, and long time horizons, ECHO delivers high-fidelity simulation on million-node grids and sets a new state of the art in computational efficiency and cross-task generalization.
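The ~100× compression figure can be illustrated with simple pooling arithmetic. The sketch below is not ECHO's actual encoder (which is a learned hierarchical convolutional network); strided average pooling stands in for it, and the trajectory shape, time factor of 4, and spatial factor of 5 are illustrative assumptions chosen so that the compression ratio works out to exactly 100×.

```python
import numpy as np

def pool(u, ft, fs):
    """Average-pool a trajectory u of shape (T, H, W) by a factor
    ft in time and fs in each spatial dimension (a stand-in for a
    hierarchical convolutional encoder's downsampling)."""
    T, H, W = u.shape
    return u.reshape(T // ft, ft, H // fs, fs, W // fs, fs).mean(axis=(1, 3, 5))

# A toy trajectory: 20 time steps on a 100 x 100 grid.
u = np.random.default_rng(0).random((20, 100, 100))

# Compress 4x in time and 5x per spatial axis: 4 * 5 * 5 = 100x overall.
z = pool(u, ft=4, fs=5)
print(z.shape)                 # latent trajectory: (5, 20, 20)
print(u.size // z.size)        # spatiotemporal compression ratio: 100
```

A decoder would then map this coarse latent back to the full-resolution grid; the point of the sketch is only that modest per-axis factors multiply into the ~100× figure quoted above.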

📝 Abstract
We introduce ECHO, a transformer-operator framework for generating million-point PDE trajectories. While existing neural operators (NOs) have shown promise for solving partial differential equations, they remain limited in practice due to poor scalability on dense grids, error accumulation during dynamic unrolling, and task-specific design. ECHO addresses these challenges through three key innovations. (i) It employs a hierarchical convolutional encoder-decoder architecture that achieves a 100× spatio-temporal compression while preserving fidelity on mesh points. (ii) It incorporates a training and adaptation strategy that enables high-resolution PDE solution generation from sparse input grids. (iii) It adopts a generative modeling paradigm that learns complete trajectory segments, mitigating long-horizon error drift. The training strategy decouples representation learning from downstream task supervision, allowing the model to tackle multiple tasks such as trajectory generation, forward and inverse problems, and interpolation. The generative model further supports both conditional and unconditional generation. We demonstrate state-of-the-art performance on million-point simulations across diverse PDE systems featuring complex geometries, high-frequency dynamics, and long-term horizons.
Problem

Research questions and friction points this paper is trying to address.

Addresses scalability issues in neural operators for dense grid PDEs
Mitigates error accumulation during dynamic unrolling of PDE trajectories
Enables high-resolution PDE solutions from sparse input grids
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical convolutional encoder-decoder for spatio-temporal compression
Training strategy for high-resolution generation from sparse grids
Generative modeling of trajectory segments to reduce error drift
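The intuition behind the last point can be shown on a toy scalar system. This is not ECHO's model: `surrogate` is a hypothetical learned operator whose per-call error is a fixed bias `EPS`, and the decay rate `A` and horizon `T` are made-up constants. Generating whole segments of `SEG` steps per call means fewer calls over the horizon, so less error is injected and re-propagated than in step-by-step autoregressive rollout.

```python
A, EPS, X0, T, SEG = 0.99, 1e-3, 1.0, 100, 10

def surrogate(x, steps):
    # Hypothetical learned operator: advances `steps` time steps of the
    # true dynamics x -> A * x, plus a fixed error EPS per model call.
    return A**steps * x + EPS

true = A**T * X0  # exact solution after T steps

# Step-wise rollout: T model calls; each call's error is fed back in.
x = X0
for _ in range(T):
    x = surrogate(x, 1)
err_step = abs(x - true)

# Segment rollout: only T // SEG model calls over the same horizon.
y = X0
for _ in range(T // SEG):
    y = surrogate(y, SEG)
err_seg = abs(y - true)

print(err_step, err_seg)  # segment rollout accumulates far less drift
```

With these numbers the segment rollout injects error 10 times rather than 100, so its final drift is roughly an order of magnitude smaller, which is the mechanism the bullet above appeals to.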
Armand Kassaï Koupaï
Sorbonne Université, CNRS, ISIR, 75005 Paris, France
Lise Le Boudec
Sorbonne Université, CNRS, ISIR, 75005 Paris, France
Patrick Gallinari
Professor Sorbonne University / Criteo AI Lab
Machine Learning · Deep Learning · Physics-aware Deep Learning · Natural Language Processing