Field-Space Attention for Structure-Preserving Earth System Transformers

📅 2025-12-23
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the loss of continuity and geometric structure in latent-space mappings for geophysical field modeling, this paper introduces Field-Space Attention—a mechanism that computes attention directly on spherical continuous fields discretized via HEALPix grids, bypassing conventional implicit encoder-decoder pipelines. The method integrates non-learnable multi-scale decomposition, learnable structure-preserving deformations, geophysical prior embedding, and continuous-field attention, ensuring differential-geometric consistency and physical interpretability throughout the modeling process. Experiments on global temperature super-resolution demonstrate that the proposed approach achieves more stable convergence, substantially fewer parameters than ViT and U-Net, and improved physical fidelity—evidenced by enhanced conservation-law adherence—as well as superior statistical accuracy, with PSNR and SSIM gains of 3.2–5.8%.
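The fixed, non-learnable multi-scale decomposition mentioned above can be sketched in plain NumPy by exploiting the NESTED HEALPix pixel ordering, in which the four children of coarse pixel p at resolution nside are pixels 4p..4p+3 at resolution 2*nside, so coarsening reduces to a 4-to-1 average. This is a minimal illustration of the idea, not the paper's implementation; the function name `coarsen_nested` and the resolution `nside = 8` are chosen here for the example.

```python
import numpy as np

def coarsen_nested(field, levels):
    """Fixed (non-learnable) multiscale decomposition of a HEALPix map.

    Assumes NESTED pixel ordering, where the 4 children of coarse pixel
    p at resolution nside are pixels 4p..4p+3 at resolution 2*nside, so
    one coarsening step is a simple 4-to-1 average. Returns the pyramid
    of maps from fine to coarse.
    """
    pyramid = [field]
    for _ in range(levels):
        field = field.reshape(-1, 4).mean(axis=1)  # average sibling pixels
        pyramid.append(field)
    return pyramid

nside = 8                                   # a HEALPix map has 12 * nside**2 pixels
rng = np.random.default_rng(0)
temp = rng.normal(size=12 * nside**2)       # toy global temperature field (768 pixels)
pyr = coarsen_nested(temp, levels=2)
print([p.size for p in pyr])                # [768, 192, 48]
```

Because HEALPix pixels are equal-area, this averaging preserves the global mean of the field at every level, which is the kind of conservation property the summary refers to.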

📝 Abstract
Accurate and physically consistent modeling of Earth system dynamics requires machine-learning architectures that operate directly on continuous geophysical fields and preserve their underlying geometric structure. Here we introduce Field-Space Attention, a mechanism for Earth system Transformers that computes attention in the physical domain rather than in a learned latent space. By maintaining all intermediate representations as continuous fields on the sphere, the architecture enables interpretable internal states and facilitates the enforcement of scientific constraints. The model employs a fixed, non-learned multiscale decomposition and learns structure-preserving deformations of the input field, allowing coherent integration of coarse and fine-scale information while avoiding the optimization instabilities characteristic of standard single-scale Vision Transformers. Applied to global temperature super-resolution on a HEALPix grid, Field-Space Transformers converge more rapidly and stably than conventional Vision Transformers and U-Net baselines, while requiring substantially fewer parameters. The explicit preservation of field structure throughout the network allows physical and statistical priors to be embedded directly into the architecture, yielding improved fidelity and reliability in data-driven Earth system modeling. These results position Field-Space Attention as a compact, interpretable, and physically grounded building block for next-generation Earth system prediction and generative modeling frameworks.
Problem

Research questions and friction points this paper is trying to address.

Develops a structure-preserving attention mechanism for Earth system modeling
Enables interpretable and physically consistent representations of geophysical fields
Improves stability and efficiency in global temperature super-resolution tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Field-Space Attention operates in the physical domain, not a latent space
Model uses fixed multiscale decomposition for stable optimization
Architecture preserves continuous field structure for interpretable states
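The first innovation, attention computed on the field itself rather than on latent tokens, can be illustrated with a short NumPy sketch in which every token is one pixel of a spherical grid and the output is again a scalar field on the same grid. The per-pixel feature layout (value plus unit direction vector) and the random projection matrices are illustrative assumptions standing in for learned weights, not the paper's parameterization.

```python
import numpy as np

def field_space_attention(field, pos, d=16, seed=0):
    """Scaled dot-product attention where every token IS a grid pixel.

    `field` holds scalar values on an N-pixel spherical grid; `pos` holds
    each pixel's unit direction vector. The result is one value per pixel,
    so intermediate states remain fields on the same grid rather than
    entering a latent bottleneck. Projections are random stand-ins for
    learned weights (illustrative assumption).
    """
    rng = np.random.default_rng(seed)
    x = np.concatenate([field[:, None], pos], axis=1)            # (N, 4) features
    Wq, Wk, Wv = (rng.normal(size=(4, d)) / np.sqrt(4) for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)                                # (N, N) pixel-to-pixel
    scores -= scores.max(axis=1, keepdims=True)                  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)                            # softmax over pixels
    return (w @ v).mean(axis=1)                                  # back to one value per pixel

N = 48
rng = np.random.default_rng(1)
pos = rng.normal(size=(N, 3))
pos /= np.linalg.norm(pos, axis=1, keepdims=True)                # unit vectors on the sphere
out = field_space_attention(rng.normal(size=N), pos)
print(out.shape)                                                 # (48,)
```

Because input and output live on the same pixel set, physical priors (e.g. conservation or smoothness penalties) can be applied directly to `out`, which is the interpretability argument the bullets above make.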
Maximilian Witte
German Climate Computing Center, Bundesstrasse 45a, 20146 Hamburg, Germany.
Johannes Meuer
Kühne Logistics University, ETH Zurich
Corporate Sustainability, configurational comparative methods
Étienne Plésiat
German Climate Computing Center, Bundesstrasse 45a, 20146 Hamburg, Germany.
Christopher Kadow
German Climate Computing Center, Bundesstrasse 45a, 20146 Hamburg, Germany.