🤖 AI Summary
Traditional adaptive mesh refinement (AMR) for PDE solving suffers from low efficiency and limited accuracy: vertex optimization relies on costly auxiliary equations, isotropic refinement hinders geometric alignment, and error estimation is computationally expensive. Existing machine learning approaches address only *h*- or *r*-adaptivity in isolation.
Method: We propose the first unified *h*- and *r*-adaptive optimization framework, integrating a hypergraph neural network with multi-agent reinforcement learning. We formulate an agent-heterogeneous Markov decision process (MDP) to theoretically guarantee untangled, anisotropic vertex movement and introduce a finite-element-error-driven reward mechanism coupling local refinement and global mesh quality.
Contribution/Results: Experiments demonstrate a 6–10× reduction in approximation error at equal element count, significantly improved mesh quality, and markedly enhanced capability to resolve geometric features with anisotropic alignment.
📝 Abstract
Adaptive mesh refinement is central to the efficient solution of partial differential equations (PDEs) via the finite element method (FEM). Classical $r$-adaptivity optimizes vertex positions but requires solving expensive auxiliary PDEs such as the Monge-Ampère equation, while classical $h$-adaptivity modifies topology through element subdivision but suffers from expensive error indicator computation and is constrained by isotropic refinement patterns that impose accuracy ceilings. Combined $hr$-adaptive techniques naturally outperform single-modality approaches, yet inherit both computational bottlenecks and the restricted cost-accuracy trade-off. Emerging machine learning methods for adaptive mesh refinement seek to overcome these limitations, but existing approaches address $h$-adaptivity or $r$-adaptivity in isolation. We present HypeR, a deep reinforcement learning framework that jointly optimizes mesh relocation and refinement. HypeR casts the joint adaptation problem using tools from hypergraph neural networks and multi-agent reinforcement learning. Refinement is formulated as a heterogeneous multi-agent Markov decision process (MDP) where element agents decide discrete refinement actions, while relocation follows an anisotropic diffusion-based policy on vertex agents with provable prevention of mesh tangling. The reward function combines local and global error reduction to promote general accuracy. Across benchmark PDEs, HypeR reduces approximation error by up to 6--10$\times$ versus state-of-the-art $h$-adaptive baselines at comparable element counts, breaking through the uniform refinement accuracy ceiling that constrains subdivision-only methods. The framework produces meshes with improved shape metrics and alignment to solution anisotropy, demonstrating that jointly learned $hr$-adaptivity strategies can substantially enhance the capabilities of automated mesh generation.
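The abstract describes a reward that combines local and global finite-element-error reduction. The paper does not give the formula, so the sketch below is purely illustrative: a per-agent reward blending the relative error drop on the agent's own element with the relative drop in total mesh error, with a hypothetical mixing weight `alpha`. The function name and signature are assumptions, not the authors' API.

```python
def hr_reward(local_err_before: float, local_err_after: float,
              global_err_before: float, global_err_after: float,
              alpha: float = 0.5) -> float:
    """Illustrative hr-adaptivity reward: blend of local and global
    relative error reduction for one element/vertex agent.

    alpha weights the agent's own error reduction against the
    mesh-wide reduction (both terms are dimensionless ratios).
    """
    local_gain = (local_err_before - local_err_after) / local_err_before
    global_gain = (global_err_before - global_err_after) / global_err_before
    return alpha * local_gain + (1.0 - alpha) * global_gain
```

A purely local reward would let agents over-refine their own region at the expense of the rest of the mesh; the global term is what couples local refinement decisions to overall mesh quality, as the abstract indicates.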