🤖 AI Summary
To address the loss of continuity and geometric structure in latent-space mappings for geophysical field modeling, this paper introduces Field-Space Attention, a mechanism that computes attention directly on continuous spherical fields discretized on HEALPix grids, bypassing conventional implicit encoder-decoder pipelines. The method integrates a non-learnable multiscale decomposition, learnable structure-preserving deformations, geophysical prior embedding, and continuous-field attention, ensuring differential-geometric consistency and physical interpretability throughout the modeling process. Experiments on global temperature super-resolution show that the proposed approach achieves more stable convergence and substantially fewer parameters than ViT and U-Net baselines, improved physical fidelity as evidenced by stronger adherence to conservation laws, and superior statistical accuracy, with PSNR and SSIM gains of 3.2–5.8%.
📝 Abstract
Accurate and physically consistent modeling of Earth system dynamics requires machine-learning architectures that operate directly on continuous geophysical fields and preserve their underlying geometric structure. Here we introduce Field-Space Attention, a mechanism for Earth system Transformers that computes attention in the physical domain rather than in a learned latent space. By maintaining all intermediate representations as continuous fields on the sphere, the architecture enables interpretable internal states and facilitates the enforcement of scientific constraints. The model employs a fixed, non-learned multiscale decomposition and learns structure-preserving deformations of the input field, allowing coherent integration of coarse- and fine-scale information while avoiding the optimization instabilities characteristic of standard single-scale Vision Transformers. Applied to global temperature super-resolution on a HEALPix grid, Field-Space Transformers converge more rapidly and stably than conventional Vision Transformer and U-Net baselines, while requiring substantially fewer parameters. The explicit preservation of field structure throughout the network allows physical and statistical priors to be embedded directly into the architecture, yielding improved fidelity and reliability in data-driven Earth system modeling. These results position Field-Space Attention as a compact, interpretable, and physically grounded building block for next-generation Earth system prediction and generative modeling frameworks.
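To make the two core ingredients concrete, here is a minimal NumPy sketch of (a) the fixed, non-learned multiscale decomposition and (b) attention computed directly on per-pixel field values. This is an illustrative reconstruction, not the paper's implementation: the function names (`healpix_coarsen`, `field_space_attention`), the scalar-field setup, and the random projection matrices are all hypothetical. The one HEALPix-specific fact it relies on is that in NESTED ordering the four children of a coarse pixel are contiguous, so one level of coarsening is an average over consecutive groups of four pixels.

```python
import numpy as np

def healpix_coarsen(field, levels):
    """Fixed (non-learned) multiscale decomposition of a scalar field.
    Assumes NESTED HEALPix ordering, where each coarse pixel's 4 children
    are contiguous: coarsening one level = averaging groups of 4 pixels."""
    scales = [field]
    for _ in range(levels):
        field = field.reshape(-1, 4).mean(axis=1)
        scales.append(field)
    return scales

def field_space_attention(field, w_q, w_k, w_v):
    """Hypothetical minimal form of attention in field space: queries, keys,
    and values are linear maps of the per-pixel feature vectors, so the
    output remains a field on the same HEALPix grid (no latent bottleneck)."""
    q, k, v = field @ w_q, field @ w_k, field @ w_v
    scores = q @ k.T / np.sqrt(q.shape[1])
    # numerically stable softmax over pixels
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ v  # shape (npix, d_v): still a field on the sphere

# Toy example: nside=4 gives 12 * 4**2 = 192 pixels (NESTED ordering assumed)
rng = np.random.default_rng(0)
npix, d = 192, 8
field = rng.standard_normal((npix, d))

scales = healpix_coarsen(field[:, 0], levels=2)   # 192 -> 48 -> 12 pixels
out = field_space_attention(
    field, *(rng.standard_normal((d, d)) for _ in range(3))
)
```

Because the coarsening is a plain average over equal-sized pixel groups, it preserves the global mean of the field exactly, which is one simple example of the conservation properties the abstract attributes to keeping all intermediate representations as fields.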