🤖 AI Summary
Traditional total variation (TV) regularization is tied to discrete pixel grids, which limits its applicability to irregular data such as point clouds and spatial transcriptomics and restricts its ability to preserve fine detail and model multiple directions. This paper introduces NeurTV, a TV regularization defined on a continuous neural domain: local structure is captured through derivatives of a coordinate-based deep network with respect to its input coordinates, removing the reliance on discrete difference operators and structured grids. Its core contribution is the first generalization of TV to the neural continuous domain, which eliminates the discretization error inherent in finite differences, supports arbitrary sampling geometries and directional derivatives of any order, and theoretically connects classical TV to new space-variant variants. Experiments demonstrate that NeurTV improves reconstruction accuracy and structural fidelity in color and hyperspectral image restoration, point cloud denoising, and spatial transcriptomics super-resolution, validating its effectiveness on both grid-structured and unstructured data.
📝 Abstract
Recently, we have witnessed the success of total variation (TV) regularization in many imaging applications. However, traditional TV is defined on the original discrete pixel domain, which limits its potential. In this work, we suggest a new TV regularization defined on the neural domain. Concretely, the discrete data are implicitly and continuously represented by a deep neural network (DNN), and we use the derivatives of the DNN outputs with respect to the input coordinates to capture local correlations of the data. Compared with classical TV on the original domain, the proposed TV on the neural domain (termed NeurTV) enjoys the following advantages. First, NeurTV is free of the discretization error induced by the discrete difference operator. Second, NeurTV is not limited to meshgrid data but is suitable for both meshgrid and non-meshgrid data. Third, owing to the implicit and continuous nature of the neural domain, NeurTV can more accurately capture local correlations of the data for any direction and any order of derivative. We theoretically reinterpret NeurTV under the variational approximation framework, which allows us to build the connection between NeurTV and classical TV and inspires variants such as space-variant NeurTV. Extensive numerical experiments on meshgrid data (e.g., color and hyperspectral images) and non-meshgrid data (e.g., point clouds and spatial transcriptomics) showcase the effectiveness of the proposed methods.
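The core mechanism (differentiating a coordinate network with respect to its inputs instead of applying discrete differences) can be illustrated with a minimal one-dimensional sketch. The toy network, its weights, and the sampling scheme below are illustrative assumptions, not the authors' implementation; in practice the derivative would come from automatic differentiation of a full coordinate-based DNN.

```python
import math
import random

# Toy one-hidden-layer coordinate network f(x) = sum_j w2[j] * tanh(w1[j]*x + b[j]).
# The weights are arbitrary illustrative values.
W1 = [1.3, -0.7, 2.1]
B = [0.2, -0.5, 0.1]
W2 = [0.8, 1.5, -0.6]

def f(x):
    """Continuous implicit representation of the signal at coordinate x."""
    return sum(w2 * math.tanh(w1 * x + b) for w1, b, w2 in zip(W1, B, W2))

def dfdx(x):
    """Exact derivative of f w.r.t. the input coordinate (chain rule).

    This replaces the discrete difference operator of classical TV, so there
    is no discretization error and x need not lie on a grid.
    """
    return sum(w2 * w1 * (1.0 - math.tanh(w1 * x + b) ** 2)
               for w1, b, w2 in zip(W1, B, W2))

def neurtv_loss(coords):
    """Anisotropic TV on the neural domain: mean |df/dx| over sampled coordinates."""
    return sum(abs(dfdx(x)) for x in coords) / len(coords)

# Coordinates can be arbitrary (non-meshgrid) sample points.
random.seed(0)
coords = [random.uniform(-1.0, 1.0) for _ in range(100)]
loss = neurtv_loss(coords)
```

In a full method this penalty would be added to a data-fitting term and minimized over the network weights; higher-order or directional versions follow by differentiating further or along a chosen direction.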