Vector-Valued Reproducing Kernel Banach Spaces for Neural Networks and Operators

📅 2025-09-30
📈 Citations: 0 · Influential: 0
🤖 AI Summary
Existing studies focus primarily on scalar-valued neural networks and their connections to reproducing kernel Banach spaces (RKBSs), leaving the function-space structure of vector-valued networks (e.g., ℝᵈ-valued networks, DeepONets, hypernetworks) and neural operators largely uncharacterized within the RKBS framework. Method: The authors propose a general framework of vector-valued reproducing kernel Banach spaces (vv-RKBSs), establishing a rigorous theory that requires no symmetry, finite-dimensional-output, reflexivity, or separability assumptions, inherently carries a reproducing kernel, and generalizes familiar vv-RKHS properties. Leveraging integral-operator analysis and representer theorems, the framework unifies diverse non-scalar architectures. Contribution/Results: Shallow ℝᵈ-valued networks, DeepONets, and hypernetworks are all shown to reside in specific vv-RKBSs, and representer theorems are derived for their associated optimization problems, providing a functional-analytic foundation for neural operator learning.
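
For orientation, the vector-valued reproducing property being generalized can be stated in the Hilbert-space case that the paper's vv-RKBS definition is said to recover. The following is a minimal sketch with standard (assumed) notation, not a formula taken from the paper:

```latex
% vv-RKHS reproducing property (the Hilbert special case; assumed notation).
% X is the input set, Y a Hilbert output space, H a Hilbert space of
% functions f : X -> Y, and K : X x X -> L(Y) an operator-valued kernel.
\langle f(x),\, y \rangle_Y
  \;=\;
\langle f,\, K(\cdot, x)\, y \rangle_{\mathcal{H}}
\qquad \text{for all } f \in \mathcal{H},\ x \in X,\ y \in Y .
```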

📝 Abstract
Recently, there has been growing interest in characterizing the function spaces underlying neural networks. While shallow and deep scalar-valued neural networks have been linked to scalar-valued reproducing kernel Banach spaces (RKBS), $\mathbb{R}^d$-valued neural networks and neural operator models remain less understood in the RKBS setting. To address this gap, we develop a general definition of vector-valued RKBS (vv-RKBS), which inherently includes the associated reproducing kernel. Our construction extends existing definitions by avoiding restrictive assumptions such as symmetric kernel domains, finite-dimensional output spaces, reflexivity, or separability, while still recovering familiar properties of vector-valued reproducing kernel Hilbert spaces (vv-RKHS). We then show that shallow $\mathbb{R}^d$-valued neural networks are elements of a specific vv-RKBS, namely an instance of the integral and neural vv-RKBS. To also explore the functional structure of neural operators, we analyze the DeepONet and Hypernetwork architectures and demonstrate that they too belong to an integral and neural vv-RKBS. In all cases, we establish a Representer Theorem, showing that optimization over these function spaces recovers the corresponding neural architectures.
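
To make the first concrete object tangible: a shallow $\mathbb{R}^d$-valued network is a finite sum of neurons whose outer weights are vectors rather than scalars. The sketch below is a hypothetical NumPy illustration of that architecture class (all names and shapes are assumptions, not the paper's code):

```python
import numpy as np

def shallow_vector_network(x, W, b, A, sigma=np.tanh):
    """Shallow R^d-valued network: f(x) = sum_k A[:, k] * sigma(<W[k], x> + b[k]).

    Illustrative sketch of the architecture class the paper places in an
    integral/neural vv-RKBS (names and shapes are assumptions).
    x: input, shape (m,); W: inner weights, shape (n, m);
    b: biases, shape (n,); A: vector-valued outer weights, shape (d, n).
    """
    hidden = sigma(W @ x + b)   # (n,) hidden activations
    return A @ hidden           # (d,) vector-valued output

# Usage: a width-3 network mapping R^2 -> R^4
rng = np.random.default_rng(0)
f = shallow_vector_network(rng.normal(size=2),
                           W=rng.normal(size=(3, 2)),
                           b=rng.normal(size=3),
                           A=rng.normal(size=(4, 3)))
print(f.shape)  # (4,)
```
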
Problem

Research questions and friction points this paper is trying to address.

Extending RKBS theory to vector-valued neural networks
Developing general vector-valued RKBS without restrictive assumptions
Establishing Representer Theorems for neural operators and architectures (see the sketch below)
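
As context for the last point, the classical scalar RKHS representer theorem takes the form below; per the abstract, the paper's contribution is an analogue over vv-RKBSs in which minimizers recover the neural architectures themselves (sketch with assumed notation, Hilbert case only):

```latex
% Classical RKHS representer theorem (Hilbert case, for context only):
% given data (x_i, y_i), any minimizer of the regularized empirical risk
% is a finite kernel expansion over the training inputs.
f^\star \in \operatorname*{arg\,min}_{f \in \mathcal{H}}
  \sum_{i=1}^{N} L\bigl(f(x_i), y_i\bigr) + \lambda \|f\|_{\mathcal{H}}^2
\quad \Longrightarrow \quad
f^\star(\cdot) = \sum_{i=1}^{N} c_i\, K(\cdot, x_i).
```
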
Innovation

Methods, ideas, or system contributions that make the work stand out.

Defined vector-valued RKBS with reproducing kernels
Extended vv-RKBS to neural networks and operators (illustrated below)
Established Representer Theorem for neural architectures
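
To illustrate the operator case named in the second point: a DeepONet evaluates an operator $G(u)(y)$ as the inner product of a branch net, acting on sensor values of the input function $u$, and a trunk net, acting on the query point $y$. The following is a minimal, hypothetical NumPy sketch of that structure with shallow one-layer branch and trunk nets (all names are assumptions, not the paper's code):

```python
import numpy as np

def deeponet_forward(u_sensors, y, branch_W, branch_b, trunk_W, trunk_b,
                     sigma=np.tanh):
    """Minimal DeepONet-style evaluation: G(u)(y) ~ <branch(u), trunk(y)>.

    Illustrative sketch only; the paper shows such operator architectures
    live in an integral/neural vv-RKBS.
    u_sensors: (m,) input function sampled at m sensor points;
    y: (q,) query point in the output domain.
    """
    branch = sigma(branch_W @ u_sensors + branch_b)  # (p,) coefficients
    trunk = sigma(trunk_W @ y + trunk_b)             # (p,) basis values
    return branch @ trunk                            # scalar G(u)(y)

# Usage: 8 sensors, 2-D query point, latent dimension p = 5
rng = np.random.default_rng(1)
val = deeponet_forward(rng.normal(size=8), rng.normal(size=2),
                       branch_W=rng.normal(size=(5, 8)),
                       branch_b=rng.normal(size=5),
                       trunk_W=rng.normal(size=(5, 2)),
                       trunk_b=rng.normal(size=5))
print(val)
```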