🤖 AI Summary
ERGMs suffer from intractable normalizing constants, which render conventional MCMC-based maximum likelihood estimation computationally expensive and inherently sequential. To address this, the paper proposes a neural network framework trained, via supervised learning on a single large set of simulated parameter-statistic pairs, to approximate the mapping from model parameters to average network sufficient statistics. Once trained, this map is inverted to recover parameter estimates directly from observed statistics, so no MCMC sampling is needed at estimation time. The procedure accommodates additional network statistics, which improves robustness to model misspecification. In illustrative examples, the method performs comparably to MCMC-MLE while being fast and parallelizable, since the expensive simulation step is performed once, up front, rather than at every optimization iteration.
📝 Abstract
Exponential random graph models (ERGMs) are flexible tools for modeling network formation, but their intractable normalizing constant makes estimation difficult. Existing methods, such as MCMC-MLE, rely on sequential simulation at every optimization step. We propose a neural network approach that trains on a single, large set of parameter-simulation pairs to learn the mapping from parameters to average network statistics. Once trained, this map can be inverted, yielding a fast and parallelizable estimation method. The procedure also accommodates additional network statistics to mitigate model misspecification. Simple illustrative examples show that the method performs well in practice.
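The pipeline the abstract describes — simulate (parameter, statistic) pairs once, fit the parameter-to-statistic map, then invert the fitted map to estimate parameters — can be sketched for the simplest ERGM, whose only sufficient statistic is the edge count (so each dyad is an independent Bernoulli draw). This is an illustrative assumption, not the paper's model or code, and a cubic polynomial stands in for the neural network:

```python
# Sketch of "simulate pairs, learn the forward map, invert it" for the
# edge-only ERGM. All modeling choices here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 20                      # nodes
n_pairs = n * (n - 1) // 2  # dyads

def simulate_mean_edges(theta, n_sims=200):
    """Monte Carlo estimate of E[edge count | theta]: under the edge-only
    ERGM each dyad is an independent Bernoulli(sigmoid(theta)) draw."""
    p = 1.0 / (1.0 + np.exp(-theta))
    return rng.binomial(n_pairs, p, size=n_sims).mean()

# 1) Build one large training set of (parameter, simulated statistic) pairs.
thetas = np.linspace(-3.0, 3.0, 61)
stats = np.array([simulate_mean_edges(t) for t in thetas])

# 2) "Learn" the forward map theta -> average statistic; a cubic fit
#    plays the role of the neural network in the paper.
forward = np.poly1d(np.polyfit(thetas, stats, deg=3))

# 3) Invert the fitted map by bisection to estimate theta from an
#    observed statistic -- no further simulation at estimation time.
def estimate_theta(observed_stat, lo=-3.0, hi=3.0, tol=1e-6):
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if forward(mid) < observed_stat:  # forward map is increasing here
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

true_theta = 0.7
obs = simulate_mean_edges(true_theta, n_sims=2000)
print(estimate_theta(obs))  # should land near 0.7
```

Because the simulation budget is spent once on the training grid, many observed networks can then be estimated in parallel by evaluating and inverting the same fitted map, which is the source of the speedup over per-fit MCMC.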