K-DAREK: Distance Aware Error for Kurkova Kolmogorov Networks

📅 2025-10-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low approximation efficiency, poor interpretability, and lack of uncertainty quantification in Kurkova Kolmogorov-Arnold networks (KKANs), this paper proposes K-DAREK, a semi-parametric architecture integrating Chebyshev polynomial layers, multi-layer perceptrons (MLPs), and spline transformations. Crucially, it introduces a novel distance-aware error mechanism that establishes a robust error bound for each test point that depends on its distance to the training set, improving extrapolation reliability and safety. K-DAREK also incorporates an embedded uncertainty quantification module, enabling safe modeling and control of safety-critical dynamical systems. Experiments demonstrate that K-DAREK improves safety by 50% over DAREK; achieves 4× faster inference and 10× higher computational efficiency than an ensemble of KANs; and scales 8.6× better than Gaussian processes as the data size grows, with markedly improved training stability.

📝 Abstract
Neural networks are parametric and powerful tools for function approximation, and the choice of architecture heavily influences their interpretability, efficiency, and generalization. In contrast, Gaussian processes (GPs) are nonparametric probabilistic models that define distributions over functions using a kernel to capture correlations among data points. However, these models become computationally expensive for large-scale problems, as they require inverting a large covariance matrix. Kolmogorov-Arnold networks (KANs), semi-parametric neural architectures, have emerged as a prominent approach for modeling complex functions with structured and efficient representations through spline layers. Kurkova Kolmogorov-Arnold networks (KKANs) extend this idea by reducing the number of spline layers in KAN and replacing them with Chebyshev layers and multi-layer perceptrons, thereby mapping inputs into higher-dimensional spaces before applying spline-based transformations. Compared to KANs, KKANs exhibit more stable convergence during training, making them a strong architecture for estimating operators and for system modeling in dynamical systems. By enhancing the KKAN architecture, we develop a novel learning algorithm, distance-aware error for Kurkova-Kolmogorov networks (K-DAREK), for efficient and interpretable function approximation with uncertainty quantification. Our approach establishes robust error bounds that are distance-aware; this means they reflect the proximity of a test point to its nearest training points. Through case studies on a safe control task, we demonstrate that K-DAREK is about four times faster and ten times more computationally efficient than an ensemble of KANs, 8.6 times more scalable than GPs as the data size increases, and 50% safer than our previous work, distance-aware error for Kolmogorov networks (DAREK).
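The core idea of a distance-aware bound is that the guaranteed error at a test point grows with its distance to the nearest training point. A minimal sketch of that idea is below; the `train_err` and `lipschitz` parameters and the nearest-neighbor form are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def distance_aware_bound(x_test, X_train, train_err, lipschitz):
    """Illustrative distance-aware error bound: a fit error on the
    training data plus a Lipschitz term that grows with the distance
    from the test point to its nearest training point."""
    dists = np.linalg.norm(X_train - x_test, axis=1)
    return train_err + lipschitz * dists.min()

# The bound widens as the test point moves away from the training data.
X_train = np.array([[0.0], [1.0], [2.0]])
near = distance_aware_bound(np.array([1.1]), X_train, train_err=0.05, lipschitz=2.0)
far = distance_aware_bound(np.array([5.0]), X_train, train_err=0.05, lipschitz=2.0)
```

Here `near` is much smaller than `far`, reflecting the safety intuition: predictions far from observed data carry a provably larger error envelope.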
Problem

Research questions and friction points this paper is trying to address.

Enhancing KKAN architecture for efficient function approximation
Developing distance-aware error bounds for uncertainty quantification
Improving computational efficiency and safety in dynamical systems modeling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Enhanced KKAN architecture with distance-aware error bounds
Combines Chebyshev layers and spline transformations for efficiency
Provides uncertainty quantification with computational scalability improvements
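The Chebyshev layers mentioned above map inputs through a fixed polynomial basis before the learned spline transformations. A hypothetical sketch of such a basis expansion, using the standard three-term recurrence (not the paper's implementation):

```python
import numpy as np

def chebyshev_features(x, degree):
    """Chebyshev polynomial features T_0(x)..T_degree(x) via the
    recurrence T_{k+1}(x) = 2x*T_k(x) - T_{k-1}(x).
    Assumes x is already scaled into [-1, 1]."""
    feats = [np.ones_like(x), x]
    for _ in range(2, degree + 1):
        feats.append(2 * x * feats[-1] - feats[-2])
    return np.stack(feats[: degree + 1], axis=-1)

# Lift 5 scalar inputs into a 4-dimensional Chebyshev feature space.
x = np.linspace(-1.0, 1.0, 5)
phi = chebyshev_features(x, degree=3)  # shape (5, 4)
```

A downstream spline or linear layer then operates on these higher-dimensional features rather than the raw inputs, which is what gives KKAN-style models their structured representation.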