🤖 AI Summary
This paper addresses sparse learning in the absence of prior group information. The authors propose an adaptive sparse method grounded in heat-diffusion dynamics on networks. The approach models implicit variable group structure via Laplacian geometry, enabling continuous interpolation between the lasso and the group lasso through data-driven graph construction and diffusion-time tuning, so that it automatically adapts to varying group-structure strength without pre-clustering variables. In a wide range of settings, it provably suffices to run the diffusion for a time that is only logarithmic in the problem dimensions. The paper establishes statistical consistency under broad conditions, with explicit upper bounds on sample complexity, and validates the approach on statistical physics models from network science, including Gaussian free fields and stochastic block models, demonstrating efficacy in recovering structured sparsity.
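To make the lasso-to-group-lasso interpolation concrete, here is one plausible form such a heat-flow penalty could take; the exact functional form below is our illustrative assumption, not a formula quoted from the paper. For a variable graph with Laplacian $L$ and diffusion time $t$,

$$\Omega_t(\beta) \;=\; \sum_{j=1}^{p} \sqrt{\big(e^{-tL}\,\beta^{\odot 2}\big)_j}, \qquad \beta^{\odot 2} := (\beta_1^2,\dots,\beta_p^2).$$

At $t = 0$ the heat kernel $e^{-tL}$ is the identity and $\Omega_0(\beta) = \|\beta\|_1$, the lasso penalty; as $t \to \infty$, the kernel averages within each connected component $g$ of the graph, giving $\Omega_t(\beta) \to \sum_g \sqrt{n_g}\,\|\beta_g\|_2$, a weighted group lasso over the components.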
📝 Abstract
Group or cluster structure on explanatory variables in machine learning problems is a pervasive phenomenon that has attracted broad interest from practitioners and theoreticians alike. In this work we contribute an approach to sparse learning under such group structure that does not require prior information on the group identities. Our paradigm is motivated by the Laplacian geometry of an underlying network with a related community structure, and proceeds by directly incorporating this geometry into a penalty that is effectively computed via heat-flow-based local network dynamics. The proposed penalty interpolates between the lasso and the group lasso penalties, with the runtime of the heat-flow dynamics serving as the interpolating parameter. As such, it can automatically default to the lasso when the group structure reflected in the Laplacian is weak. Moreover, we demonstrate a data-driven procedure to construct such a network from the available data. Notably, we dispense with computationally intensive pre-processing involving clustering of variables, spectral or otherwise. Our technique is underpinned by rigorous theorems that guarantee its effective performance and provide bounds on its sample complexity. In particular, in a wide range of settings, it provably suffices to run the diffusion for a time that is only logarithmic in the problem dimensions. We explore in detail the interfaces of our approach with key statistical physics models in network science, such as the Gaussian Free Field and the Stochastic Block Model. Our work raises the possibility of applying similar diffusion-based techniques to classical learning tasks, exploiting the interplay between the geometric, dynamical, and stochastic structures underlying the data.
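Below is a minimal runnable sketch of the penalty form assumed above, using a dense matrix exponential for clarity (the paper's local heat-flow dynamics would avoid this cost); the function name `heat_flow_penalty` and the specific formula are our assumptions for illustration, not the paper's definition.

```python
import numpy as np
from scipy.linalg import expm

def heat_flow_penalty(beta, L, t):
    """Illustrative heat-flow penalty (assumed form, not the paper's exact
    definition): diffuse the squared coefficients with the heat kernel
    exp(-t*L), take coordinate-wise square roots, and sum."""
    K = expm(-t * L)                       # heat kernel on the variable graph
    smoothed = K @ (beta ** 2)             # heat-flow smoothing of beta^2
    return np.sum(np.sqrt(np.maximum(smoothed, 0.0)))

# Toy check: two disconnected edges, i.e. two latent "groups" of size 2.
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A             # combinatorial graph Laplacian
beta = np.array([1.0, -2.0, 0.0, 0.0])

print(heat_flow_penalty(beta, L, t=0.0))   # 3.0   = ||beta||_1 (lasso limit)
print(heat_flow_penalty(beta, L, t=50.0))  # ~3.16 = sqrt(2)*||beta_g||_2 (group-lasso limit)
```

At `t = 0` the output equals the l1 norm of `beta`; at large `t` it matches the weighted group-lasso value `sqrt(n_g) * ||beta_g||_2` summed over components (here `sqrt(2) * sqrt(5) ≈ 3.162`), illustrating how the diffusion time governs the interpolation between the two penalties.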