🤖 AI Summary
This work addresses two numerical challenges in hypergraph $p$-Laplacian regularization for data interpolation and semi-supervised learning: the non-smoothness of the objective functional and the non-uniqueness of its minimizers. We first rigorously derive a variational equation from the subdifferential of the $p$-Laplacian energy, and then construct a mathematically well-posed, computationally efficient simplified model. This model circumvents both the non-smoothness and the non-uniqueness of solutions, effectively suppressing spurious spikes and oscillations in interpolation while improving classification accuracy in semi-supervised tasks; it also exhibits low time complexity and strong scalability. Our core contribution is the systematic integration of subdifferential analysis into hypergraph signal processing, yielding the first numerically implementable $p$-Laplacian framework for large-scale hypergraph learning that is simultaneously theoretically rigorous and practically efficient.
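To make the objects concrete, the following is a minimal sketch of the setup, assuming the standard "maximum gap" hypergraph regularizer common in the literature; the summary does not state the paper's exact functional, so $\mathcal{R}_p$, the labeled set $\Gamma$, and the label function $g$ below are illustrative notation, not necessarily the paper's.

```latex
% Illustrative only: a common hypergraph p-Laplacian regularizer from
% the literature; the paper's exact functional may differ. Here
% G = (V, E) is a hypergraph with hyperedge weights w_e > 0,
% u : V -> R is the signal, and Gamma is the labeled vertex set
% carrying given values g.
\mathcal{R}_p(u) \;=\; \frac{1}{p} \sum_{e \in E} w_e
  \Bigl( \max_{i,\, j \in e} |u_i - u_j| \Bigr)^{p},
  \qquad p \ge 1.
% The max over vertex pairs inside each hyperedge makes R_p
% non-smooth, so the first-order optimality condition for
%   min R_p(u)  subject to  u = g on Gamma
% is an inclusion in the subdifferential rather than an equation
% in the gradient:
\exists\, \xi \in \partial \mathcal{R}_p(u^\ast) \ \text{with} \
  \xi_i = 0 \ \ \forall\, i \in V \setminus \Gamma,
  \qquad u^\ast_i = g_i \ \ \forall\, i \in \Gamma.
```

This inclusion is what a "hypergraph $p$-Laplacian equation" means here: the subdifferential replaces the gradient precisely because the energy is non-differentiable.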
📝 Abstract
Hypergraph learning with $p$-Laplacian regularization has attracted considerable attention due to its flexibility in modeling higher-order relationships in data. This paper focuses on its fast numerical implementation, which is challenging due to the non-differentiability of the objective function and the non-uniqueness of the minimizer. We derive a hypergraph $p$-Laplacian equation from the subdifferential of the $p$-Laplacian regularization. A simplified equation that is mathematically well-posed and computationally efficient is proposed as an alternative. Numerical experiments verify that the simplified $p$-Laplacian equation suppresses spiky solutions in data interpolation and improves classification accuracy in semi-supervised learning. The remarkably low computational cost enables applications to large-scale problems.
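For context on why a fast alternative matters, here is a minimal baseline sketch of what direct minimization of the non-smooth energy looks like: projected subgradient descent on the standard maximum-gap regularizer, with labeled values held fixed. This is a generic baseline, not the paper's simplified equation; the function names (`hypergraph_p_energy`, `interpolate`) and the choice of regularizer are illustrative assumptions.

```python
import numpy as np

def hypergraph_p_energy(u, edges, weights, p=2.0):
    """Illustrative hypergraph p-Laplacian energy (a common choice in
    the literature; not necessarily the paper's exact functional):
        R_p(u) = (1/p) * sum_e w_e * (max_{i,j in e} |u_i - u_j|)**p
    `edges` is a list of vertex-index tuples, `weights` the w_e."""
    return sum(w * np.ptp(u[list(e)]) ** p
               for e, w in zip(edges, weights)) / p

def interpolate(u0, labeled, g, edges, weights, p=2.0, steps=500, lr=1e-2):
    """Projected subgradient descent for
        min R_p(u)  subject to  u = g on the labeled set.
    A generic non-smooth baseline; the paper's simplified equation is
    designed to avoid exactly this kind of slow first-order iteration
    and the spiky minimizers it can converge to."""
    u = u0.astype(float).copy()
    u[labeled] = g
    for _ in range(steps):
        grad = np.zeros_like(u)
        for e, w in zip(edges, weights):
            idx = list(e)
            vals = u[idx]
            i = idx[int(np.argmax(vals))]   # a vertex attaining the max
            j = idx[int(np.argmin(vals))]   # a vertex attaining the min
            gap = u[i] - u[j]
            if gap > 0:                     # at gap == 0, zero is a valid subgradient
                s = w * gap ** (p - 1)      # one element of the subdifferential
                grad[i] += s
                grad[j] -= s
        u -= lr * grad
        u[labeled] = g                      # project back onto the constraint
    return u
```

The per-iteration cost is linear in the total hyperedge size, but the iteration count needed by subgradient methods on non-smooth objectives grows quickly with accuracy, which is the computational bottleneck the simplified $p$-Laplacian equation is reported to remove.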