Cauchy Random Features for Operator Learning in Sobolev Space

📅 2025-03-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
Learning operators between infinite-dimensional Sobolev spaces remains challenging: existing neural network approaches lack theoretical convergence guarantees, while classical kernel methods suffer from high computational cost and heavy reliance on GPU acceleration. Method: We introduce Cauchy random features—novel in operator learning—to construct an efficient randomized kernel approximation algorithm within the Sobolev space framework. Contribution/Results: Our method establishes the first provable generalization error bound for infinite-dimensional operator learning via random features. It achieves rigorous theoretical guarantees while drastically reducing computational overhead: training is CPU-only and significantly faster than both kernel methods and neural networks. Empirically, it attains comparable or superior test accuracy across multiple benchmark problems. Crucially, this work presents the first random-feature-based scheme for operator learning that simultaneously ensures controllable approximation error, low computational complexity (linear in sample size), and full independence from GPU hardware.
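To make the core building block concrete, here is a minimal sketch assuming the standard random Fourier feature construction: by Bochner's theorem, frequencies drawn from a Cauchy distribution correspond to the Laplace kernel exp(-gamma * ||x - y||_1). The function name and parameters below are illustrative, not the paper's exact construction.

```python
import numpy as np

def cauchy_random_features(X, n_features=512, gamma=1.0, seed=0):
    """Map inputs X of shape (N, d) to random Fourier features (N, n_features).

    Frequencies are sampled from Cauchy(0, gamma), which by Bochner's
    theorem corresponds to the Laplace kernel exp(-gamma * ||x - y||_1).
    Hypothetical sketch; the paper's feature distribution may differ.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = gamma * rng.standard_cauchy(size=(d, n_features))  # random frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)     # random phases
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```

Inner products of these features approximate the kernel, so downstream training reduces to linear regression in feature space rather than an N-by-N kernel solve.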

📝 Abstract
Operator learning is the approximation of operators between infinite-dimensional Banach spaces using machine learning approaches. While most progress in this area has been driven by variants of deep neural networks, such as the Deep Operator Network and the Fourier Neural Operator, the theoretical guarantees are often in the form of a universal approximation property. However, such existence theorems do not guarantee that an accurate operator network is obtainable in practice. Motivated by the recent kernel-based operator learning framework, we propose a random feature operator learning method with theoretical guarantees and error bounds. The random feature method can be viewed as a randomized approximation of a kernel method, which significantly reduces the computational requirements for training. We provide a generalization error analysis for our proposed random feature operator learning method along with comprehensive numerical results. Compared to kernel-based and neural network methods, the proposed method obtains similar or better test errors across benchmark examples with significantly reduced training times. An additional advantage is that our implementation is simple and does not require costly computational resources, such as GPUs.
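Since the abstract frames the method as a randomized approximation of a kernel method, the training step can be realized as ridge regression in random feature space, whose normal equations cost O(N * D^2) for N samples and D features, i.e. linear in the sample size. The CPU-only sketch below illustrates that step under this assumption; the variable names and solver choice are hypothetical, not the paper's exact implementation.

```python
import numpy as np

def fit_random_feature_operator(Phi, Y, reg=1e-8):
    """Ridge regression in random feature space.

    Phi: (N, D) features of discretized input functions.
    Y:   (N, m) discretized output functions.
    Forming and solving the D x D normal equations costs O(N * D^2),
    linear in N -- a sketch of the claimed complexity, not the
    paper's exact solver.
    """
    D = Phi.shape[1]
    A = Phi.T @ Phi + reg * np.eye(D)  # (D, D) regularized Gram matrix
    C = np.linalg.solve(A, Phi.T @ Y)  # (D, m) coefficient matrix
    return C

# Usage sketch (with the cauchy_random_features helper above, same seed
# so train and test share the same random frequencies):
#   C = fit_random_feature_operator(cauchy_random_features(X_train), Y_train)
#   Y_pred = cauchy_random_features(X_test) @ C
```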
Problem

Research questions and friction points this paper is trying to address.

Approximating operators between infinite-dimensional Banach spaces
Reducing the computational requirements for training
Providing theoretical guarantees and error bounds, beyond universal approximation results
Innovation

Methods, ideas, or system contributions that make the work stand out.

Random feature operator learning method based on Cauchy random features
Randomized kernel approximation that significantly reduces training computation
Simple, CPU-only implementation without costly computational resources such as GPUs