AI Summary
Background: Learning hypergraph structures under unknown high-order relationships remains challenging due to inconsistent total variation (TV) definitions arising from diverse smoothness priors.
Method: This paper proposes a scalable temporal-signal-based hypergraph inference framework. We introduce the first unified convex optimization framework encompassing multiple hypergraph total variation formulations, coupled with a dynamic hyperedge search constraint mechanism to ensure both convergence and sparsity. The method jointly leverages hypergraph signal processing and heterogeneous smoothness prior modeling, and employs an efficient forward-backward-forward algorithm for optimization.
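The forward-backward-forward (FBF) scheme mentioned above alternates a gradient (forward) step on the smooth part of the objective, a proximal (backward) step on the nonsmooth part, and a correcting forward step. The following is a minimal illustrative sketch only, not the paper's actual hypergraph solver: it applies the generic FBF iteration to a toy l1-regularized least-squares problem, where `fbf`, `soft_threshold`, and the toy objective are all assumptions introduced for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fbf(grad_f, prox_g, x0, gamma, iters=200):
    """Generic forward-backward-forward iteration for min f(x) + g(x),
    with f smooth (L-Lipschitz gradient) and step size gamma < 1/L."""
    x = x0.copy()
    for _ in range(iters):
        g = grad_f(x)
        p = prox_g(x - gamma * g, gamma)   # forward-backward step
        x = p + gamma * (g - grad_f(p))    # correcting forward step
    return x

# Toy problem (not from the paper): f(x) = 0.5 * ||x - b||^2, g = lam * ||x||_1.
b, lam, gamma = np.array([3.0, -0.5, 1.0]), 1.0, 0.5
x = fbf(lambda x: x - b,
        lambda v, t: soft_threshold(v, lam * t),
        np.zeros(3), gamma)
# The closed-form minimizer is soft_threshold(b, lam) = [2, 0, 0].
```

The appeal of FBF over plain forward-backward splitting is its convergence guarantee under weaker conditions, which is why convergence can be ensured for the convex hypergraph objective.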
Results: Experiments demonstrate significant improvements in inference accuracy over state-of-the-art methods, strong robustness across varying TV terms, and scalability to large-scale graph-structured learning tasks.
Abstract
In graph signal processing, learning the weighted connections between nodes from a set of sample signals is a fundamental task when the underlying relationships are not known a priori. This task is typically addressed by finding a graph Laplacian on which the observed signals are smooth. With the extension of graphs to hypergraphs, where edges can connect more than two nodes, graph learning methods have similarly been generalized. However, the absence of a unified framework for computing total variation has led to divergent definitions of smoothness and, consequently, differing approaches to hyperedge recovery. We confront this challenge by generalizing several previously proposed hypergraph total variations, which then allows them to be substituted easily into a vector-based optimization. To this end, we propose a novel hypergraph learning method that recovers a hypergraph topology from time-series signals based on a smoothness prior. Our approach addresses key limitations of prior work, such as hyperedge selection and convergence issues, by formulating the problem as a convex optimization solved via a forward-backward-forward algorithm with guaranteed convergence. Additionally, we introduce a process that simultaneously limits the span of the hyperedge search and maintains a valid hyperedge selection set, making our method scalable to increasingly complex network structures. Experimental results demonstrate improved accuracy over other state-of-the-art hypergraph inference methods; furthermore, we empirically show our method to be robust across total variation terms, biased towards global smoothness, and scalable to larger hypergraphs.
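To make the smoothness prior concrete: in the ordinary graph case, the total variation of a signal x on a graph with weighted adjacency W is the Laplacian quadratic form x^T L x, and "learning a graph on which signals are smooth" means finding a W that makes this quantity small over the observed signals. The sketch below illustrates that baseline quantity only (the paper's contribution is the hypergraph generalization, which is not reproduced here); `graph_total_variation` and the example graph are assumptions for illustration.

```python
import numpy as np

def graph_total_variation(W: np.ndarray, x: np.ndarray) -> float:
    """Return x^T L x = 0.5 * sum_ij W_ij * (x_i - x_j)^2,
    with L = D - W the combinatorial graph Laplacian."""
    L = np.diag(W.sum(axis=1)) - W
    return float(x @ L @ x)

# Path graph on 3 nodes: edges (0,1) and (1,2) with unit weights.
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

smooth = np.array([1.0, 1.0, 1.0])    # constant signal: TV = 0
varying = np.array([1.0, -1.0, 1.0])  # oscillating signal: TV = 8
```

A signal that varies little across strongly weighted edges yields a small quadratic form, which is exactly what the smoothness prior rewards when the Laplacian (or its hypergraph analogue) is the optimization variable.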