SLIM: Stochastic Learning and Inference in Overidentified Models

📅 2025-10-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address the computational inefficiency and poor scalability of nonlinear Generalized Method of Moments (GMM) in large-scale overidentified models, this paper proposes SLIM: a stochastic learning framework that iteratively estimates parameters using random minibatches of moment conditions, without requiring a consistent initial estimator or global convexity, and supporting both fixed-sample and random-sampling asymptotics. The authors develop a family of Sargan–Hansen *J*-tests tailored to stochastic learning and combine stochastic approximation, minibatch gradient updates, random scaling, and debiased plug-in estimation, making the framework compatible with both first- and second-order optimization. In an EASI demand system with 576 moment conditions, 380 parameters, and 10⁵ observations, SLIM completes estimation and inference in under 1.4 hours, versus 18 hours for conventional full-sample GMM, and scales smoothly to million-observation datasets, achieving both statistical rigor and computational scalability.

📝 Abstract
We propose SLIM (Stochastic Learning and Inference in overidentified Models), a scalable stochastic approximation framework for nonlinear GMM. SLIM forms iterative updates from independent mini-batches of moments and their derivatives, producing unbiased directions that ensure almost-sure convergence. It requires neither a consistent initial estimator nor global convexity and accommodates both fixed-sample and random-sampling asymptotics. We further develop an optional second-order refinement and inference procedures based on random scaling and plug-in methods, including plug-in, debiased plug-in, and online versions of the Sargan--Hansen $J$-test tailored to stochastic learning. In Monte Carlo experiments based on a nonlinear EASI demand system with 576 moment conditions, 380 parameters, and $n = 10^5$, SLIM solves the model in under 1.4 hours, whereas full-sample GMM in Stata on a powerful laptop converges only after 18 hours. The debiased plug-in $J$-test delivers satisfactory finite-sample inference, and SLIM scales smoothly to $n = 10^6$.
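The abstract's core idea (iterative updates formed from independent mini-batches of moments and their derivatives, so that each update direction is unbiased) can be illustrated with a minimal sketch. This is not the authors' exact algorithm: it is a textbook stochastic-approximation loop for a simulated overidentified linear IV model, with an identity weighting matrix and a simple Robbins–Monro step size; the function name `minibatch_gmm` and all tuning constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated overidentified linear IV model: y = x @ theta0 + e,
# with 3 instruments z for 1 endogenous regressor x.
n, theta0 = 100_000, np.array([2.0])
z = rng.normal(size=(n, 3))
x = (z @ np.array([1.0, 0.5, 0.25]))[:, None] + rng.normal(size=(n, 1))
y = x @ theta0 + rng.normal(size=n)

def minibatch_gmm(y, x, z, batch=256, steps=2000, lr0=0.5):
    """Illustrative stochastic-approximation GMM (not the paper's algorithm).

    Each step draws two independent minibatches: one to evaluate the
    moment vector g(theta) and one for its Jacobian G, so the product
    G' W g is a (conditionally) unbiased estimate of the gradient of
    the GMM objective 0.5 * g' W g.
    """
    n, p = x.shape
    theta = np.zeros(p)
    W = np.eye(z.shape[1])                   # identity weighting for simplicity
    for k in range(steps):
        i = rng.integers(0, n, batch)        # minibatch for the moments
        j = rng.integers(0, n, batch)        # independent minibatch for the Jacobian
        g = z[i].T @ (y[i] - x[i] @ theta) / batch   # average moment vector
        G = -(z[j].T @ x[j]) / batch                 # Jacobian of the moments
        theta = theta - (lr0 / (1 + 0.01 * k)) * (G.T @ W @ g)  # Robbins-Monro step
    return theta

theta_hat = minibatch_gmm(y, x, z)
```

The two independent index draws `i` and `j` mirror the abstract's "independent mini-batches of moments and their derivatives": using the same batch for both would make the product `G.T @ W @ g` a biased gradient estimate.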
Problem

Research questions and friction points this paper is trying to address.

Nonlinear GMM is computationally inefficient and scales poorly in large overidentified models
Standard estimation typically presumes a consistent initial estimator or global convexity
Valid, efficient inference procedures are lacking for large-scale stochastic estimation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Stochastic approximation framework for nonlinear GMM
Iterative updates from mini-batches ensure convergence
Second-order refinement and inference procedures developed
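For context on the inference contribution, the statistic being adapted is the classical Sargan–Hansen over-identification test. The sketch below is the textbook full-sample version only; the paper's plug-in, debiased plug-in, and online variants modify it for stochastic learning, and the i.i.d. covariance estimate here is a simplifying assumption.

```python
import numpy as np
from scipy import stats

def hansen_j_test(g, n_params):
    """Classical full-sample Sargan-Hansen J-test (textbook version,
    not the paper's stochastic-learning variants).

    g: (n, m) array of moment evaluations g_i(theta_hat).
    Under correct specification, J ~ chi2(m - n_params).
    """
    n, m = g.shape
    gbar = g.mean(axis=0)
    S = np.cov(g, rowvar=False)                 # variance estimate (i.i.d. case)
    J = n * gbar @ np.linalg.solve(S, gbar)     # n * gbar' S^{-1} gbar
    pval = stats.chi2.sf(J, df=m - n_params)
    return J, pval

# Mean-zero moments from a correctly specified toy model:
rng = np.random.default_rng(1)
J, pval = hansen_j_test(rng.normal(size=(5000, 3)), n_params=1)
```

With 3 moment conditions and 1 parameter, the test has 2 degrees of freedom; a small p-value signals rejection of the overidentifying restrictions.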
Xiaohong Chen
Department of Economics, Yale University and Cowles Foundation for Research in Economics
Min Seong Kim
Department of Economics, University of Connecticut
Sokbae Lee
Department of Economics, Columbia University and Centre for Microdata Methods and Practice
Myung Hwan Seo
Seoul National University
Economics · Econometrics · Statistics
Myunghyun Song
Department of Economics, Columbia University