🤖 AI Summary
This work addresses the performance limitations of neural Koopman operators in modeling and controlling nonlinear robotic systems, which stem from an unclear trade-off between dataset size and latent space dimensionality. The study presents the first rigorous derivation of an upper bound on the Koopman approximation error, decomposing it into sampling and projection errors. Building on this decomposition, the authors establish a quantitative scaling law that relates sample complexity, latent dimension, and control performance. Guided by this theoretical insight, they propose two lightweight regularization strategies—covariance loss and inverse control loss—to optimize the allocation of data and model resources. Experiments across six robotic environments validate the derived scaling law and demonstrate that the proposed methods significantly improve both dynamic modeling accuracy and closed-loop control performance.
📝 Abstract
Data-driven neural Koopman operator theory has emerged as a powerful tool for linearizing and controlling nonlinear robotic systems. However, the performance of these data-driven models fundamentally depends on the trade-off between sample size and model dimension, a relationship for which the scaling laws have remained unclear. This paper establishes a rigorous framework to address this challenge by deriving and empirically validating scaling laws that connect sample size, latent space dimension, and downstream control quality. We derive a theoretical upper bound on the Koopman approximation error, explicitly decomposing it into sampling error and projection error. We show that these terms decay at specific rates with respect to dataset size and latent dimension, providing a rigorous basis for the scaling law. Building on these theoretical results, we introduce two lightweight regularizers for the neural Koopman operator: a covariance loss that stabilizes the learned latent features, and an inverse control loss that aligns the model with physical actuation. Systematic experiments across six robotic environments confirm that model fitting error follows the derived scaling laws, and that the regularizers improve dynamic model fitting fidelity and enhance closed-loop control performance. Together, our results provide a simple recipe for allocating effort between data collection and model capacity when learning Koopman dynamics for control.
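To make the two regularizers concrete, here is a minimal sketch of plausible forms they could take. The exact formulations are not given in the abstract, so both functions are illustrative assumptions: the covariance loss is written VICReg-style (penalizing off-diagonal latent covariance and deviation of per-dimension variance from one), and the inverse control loss recovers the applied action from consecutive latent states through a least-squares inverse of the learned input matrix, assuming linear latent dynamics z' ≈ Az + Bu.

```python
import numpy as np

def covariance_loss(z):
    """Hypothetical covariance regularizer on a batch of latent features.

    z: (N, d) array of latent states. Penalizes off-diagonal covariance
    (decorrelation) and variance drifting away from 1 (scale stability).
    The paper's exact loss may differ.
    """
    zc = z - z.mean(axis=0, keepdims=True)
    cov = zc.T @ zc / (len(z) - 1)
    off_diag = cov - np.diag(np.diag(cov))
    return (off_diag ** 2).sum() + ((np.diag(cov) - 1.0) ** 2).sum()

def inverse_control_loss(z_t, z_next, A, B, u_true):
    """Hypothetical inverse control loss under linear latent dynamics.

    Recovers the control from the latent residual z_next - A z_t via the
    pseudoinverse of B, then compares it with the action actually applied,
    tying the learned model to physical actuation.
    """
    resid = z_next - z_t @ A.T          # (N, d) residual attributed to control
    u_hat = resid @ np.linalg.pinv(B).T # (N, m) recovered actions
    return ((u_hat - u_true) ** 2).mean()
```

If the latent transitions are exactly linear and B has full column rank, the inverse control loss vanishes; in training it would be added to the prediction loss with a small weight.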