🤖 AI Summary
Model selection in network reconstruction often suffers from overfitting or underfitting because the optimal number of edges and their weights are difficult to determine adaptively. To address this, we propose a nonparametric regularization framework grounded in the minimum description length (MDL) principle, which decouples sparsity induction from weight shrinkage. Our method combines hierarchical Bayesian inference with weight quantization, enabling automatic, unbiased model selection without cross-validation. Evaluated on both synthetic and empirical networks, it achieves systematic gains in reconstruction accuracy. We apply it to reconstruct large-scale microbial interaction networks comprising 10⁴–10⁵ species, and demonstrate its utility in predicting the outcome of interventions and tipping points in the system. The framework provides a scalable, statistically principled paradigm for inferring networks from dynamical data at scale.
📝 Abstract
A fundamental problem associated with the task of network reconstruction from dynamical or behavioral data consists in determining the most appropriate model complexity in a manner that prevents overfitting and produces an inferred network with a statistically justifiable number of edges and their weight distribution. The status quo in this context is based on L1 regularization combined with cross-validation. However, besides its high computational cost, this commonplace approach unnecessarily ties the promotion of sparsity, i.e., abundance of zero weights, with weight “shrinkage.” This combination forces a trade-off between the bias introduced by shrinkage and the network sparsity, which often results in substantial overfitting even after cross-validation. In this work, we propose an alternative nonparametric regularization scheme based on hierarchical Bayesian inference and weight quantization, which does not rely on weight shrinkage to promote sparsity. Our approach follows the minimum description length principle, and uncovers the weight distribution that allows for the most compression of the data, thus avoiding overfitting without requiring cross-validation. The latter property renders our approach substantially faster and simpler to employ, as it requires a single fit to the complete data, instead of many fits for multiple data splits and choice of regularization parameter. As a result, we have a principled and efficient inference scheme that can be used with a large variety of generative models, without requiring the number of reconstructed edges and their weight distribution to be known in advance. In a series of examples, we also demonstrate that our scheme yields systematically increased accuracy in the reconstruction of both artificial and empirical networks. 
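The core MDL idea described above, scoring a candidate network by the bits needed to encode both its quantized nonzero weights and the residual data, then selecting the complexity that minimizes the total, can be illustrated with a toy sketch. Everything here (the linear generative model, the fixed quantization grid, the simple two-part code) is a hypothetical stand-in for the paper's hierarchical Bayesian scheme, not the authors' algorithm; note that retained weights are snapped to the grid rather than shrunk, so sparsity is obtained without shrinkage bias.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy generative model (illustrative only): y = X @ w_true + noise,
# with a sparse weight vector standing in for a weighted network.
n, p, k_true = 200, 10, 3
X = rng.normal(size=(n, p))
w_true = np.zeros(p)
w_true[:k_true] = [1.5, -2.0, 0.8]
y = X @ w_true + 0.1 * rng.normal(size=n)

# Quantization grid for weights (65 levels, so exact zero is representable).
grid = np.linspace(-4, 4, 65)

def description_length(y, X, w, n_possible, n_levels):
    """Two-part MDL score in bits: a Gaussian code for the residuals
    plus the cost of naming and encoding each nonzero quantized weight.
    (Differential-entropy data cost can be negative; only differences
    between candidates matter.)"""
    resid = y - X @ w
    sigma2 = max(resid.var(), 1e-12)
    data_bits = 0.5 * len(y) * np.log2(2 * np.pi * np.e * sigma2)
    k = np.count_nonzero(w)
    model_bits = k * (np.log2(n_possible) + np.log2(n_levels))
    return data_bits + model_bits

# Candidate models: keep the k largest least-squares coefficients,
# snapped to the nearest grid value -- no shrinkage of retained weights.
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
order = np.argsort(-np.abs(w_ols))
scores = {}
for k in range(p + 1):
    w = np.zeros(p)
    keep = order[:k]
    w[keep] = grid[np.abs(grid[:, None] - w_ols[keep]).argmin(axis=0)]
    scores[k] = description_length(y, X, w, p, len(grid))

best_k = min(scores, key=scores.get)
print(best_k)  # the MDL-optimal sparsity recovers the planted sparsity
```

Adding a fourth edge buys almost no residual compression but costs extra bits to describe, so the total description length is minimized at the true sparsity; no cross-validation split or regularization-strength sweep is needed, only a single pass over the complete data.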
We highlight the use of our method with the reconstruction of interaction networks between microbial communities from large-scale abundance samples involving on the order of 10⁴–10⁵ species and demonstrate how the inferred model can be used to predict the outcome of potential interventions and tipping points in the system.
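How an inferred interaction network supports intervention prediction can be sketched with a minimal simulation. The sketch below assumes generalized Lotka-Volterra dynamics, a common choice for microbial communities; the paper's actual dynamical model, interaction values, and species count are not taken from the source, and the matrix here is a small hand-made example.

```python
import numpy as np

# Generalized Lotka-Volterra dynamics (assumed model, not the paper's):
#   dx_i/dt = x_i * (r_i + sum_j A_ij x_j)
# where A plays the role of the inferred interaction network.
S = 5
r = np.ones(S)                  # growth rates (assumed equal)
A = -np.eye(S)                  # self-limitation on the diagonal
A[0, 1], A[1, 0] = 0.5, -0.4    # a predator-prey-like inferred pair
A[2, 3], A[3, 2] = -0.3, -0.3   # a mutually competitive inferred pair

def simulate(x0, A, r, dt=0.01, steps=20000):
    """Forward-Euler integration of the gLV dynamics to (near) equilibrium."""
    x = x0.copy()
    for _ in range(steps):
        x += dt * x * (r + A @ x)
        x = np.clip(x, 0.0, None)   # abundances stay nonnegative
    return x

# Baseline community equilibrium.
x_eq = simulate(np.full(S, 0.5), A, r)

# "Intervention": remove species 1 and re-equilibrate; the shift in the
# remaining species' abundances is the model's predicted outcome.
x0 = x_eq.copy()
x0[1] = 0.0
x_after = simulate(x0, A, r)
print(np.round(x_eq, 3))
print(np.round(x_after, 3))
```

In this toy example, removing species 1 lowers the equilibrium abundance of species 0, which benefited from it, while leaving the decoupled species untouched. Scanning such interventions (or a control parameter) for abrupt equilibrium shifts is one simple way an inferred model can flag potential tipping points.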
Published by the American Physical Society, 2025