A Nearly Optimal Single Loop Algorithm for Stochastic Bilevel Optimization under Unbounded Smoothness

📅 2024-12-28
🏛️ International Conference on Machine Learning
📈 Citations: 2
Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses stochastic bilevel optimization problems arising in meta-learning, where the upper-level objective is nonconvex with potentially unbounded smoothness and the lower-level objective is strongly convex, as exemplified by RNN-based text classification. Method: the authors propose SLIP, a single-loop bilevel optimizer that eliminates the nested-loop design required by prior algorithms in this setting. SLIP combines normalized momentum SGD for the upper-level variable, SGD for the lower-level variable, and Hessian-vector product estimation via implicit differentiation, together with a high-probability convergence analysis. Contribution/Results: the analysis establishes a novel theoretical connection between bilevel optimization and stochastic optimization under distributional drift. Theoretically, SLIP achieves a nearly optimal oracle complexity of Õ(1/ε⁴) in stochastic gradient and Hessian-vector product queries, guaranteeing convergence both in expectation and with high probability. Empirically, SLIP significantly outperforms strong baselines across multiple meta-learning benchmarks, demonstrating both theoretical rigor and practical efficiency.
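The Hessian-vector product estimation mentioned above can be illustrated on a toy quadratic bilevel problem. The quadratic choices of f and g, the Neumann-series truncation, and all constants below are illustrative assumptions for the sketch, not the paper's experimental setup:

```python
import numpy as np

# Toy quadratic bilevel problem (illustrative, not the paper's setup):
#   lower level: g(x, y) = 0.5 y^T A y - x^T y  (strongly convex in y; y*(x) = A^{-1} x)
#   upper level: f(x, y) = 0.5 ||y - b||^2
# Hypergradient: grad Phi(x) = grad_x f - grad2_xy g (grad2_yy g)^{-1} grad_y f
#              = A^{-1} (y*(x) - b),  since grad_x f = 0 and grad2_xy g = -I here.
rng = np.random.default_rng(0)
d = 5
M = rng.standard_normal((d, d))
A = M @ M.T + d * np.eye(d)              # well-conditioned SPD lower-level Hessian
b = rng.standard_normal(d)
x = rng.standard_normal(d)

def hvp(v):
    # Hessian-vector product with grad2_yy g = A; in practice this is the only
    # second-order oracle needed (no explicit Hessian inverse is ever formed).
    return A @ v

y_star = np.linalg.solve(A, x)           # exact lower-level solution (toy shortcut)
grad_y_f = y_star - b

# Truncated Neumann series for (grad2_yy g)^{-1} grad_y f using HVPs only:
#   v ~= eta * sum_{k=0}^{K} (I - eta A)^k grad_y f,  with eta <= 1 / lambda_max(A)
eta = 1.0 / np.linalg.norm(A, 2)
v = np.zeros(d)
p = grad_y_f.copy()
for _ in range(500):
    v += eta * p
    p -= eta * hvp(p)

hypergrad = v                            # grad_x f - grad2_xy g @ v = 0 - (-I) v
exact = np.linalg.solve(A, grad_y_f)
print(np.linalg.norm(hypergrad - exact)) # truncation error of the series (tiny)
```

The point of the sketch is that the inverse-Hessian term in the hypergradient is approximated with repeated Hessian-vector products alone, which is what makes stochastic bilevel methods tractable at scale.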

Technology Category

Application Category

📝 Abstract
This paper studies the problem of stochastic bilevel optimization where the upper-level function is nonconvex with potentially unbounded smoothness and the lower-level function is strongly convex. This problem is motivated by meta-learning applied to sequential data, such as text classification using recurrent neural networks, where the smoothness constant of the upper-level loss function scales linearly with the gradient norm and can be potentially unbounded. Existing algorithms crucially rely on a nested-loop design, which requires significant tuning effort and is not practical. In this paper, we address this issue by proposing a Single Loop bIlevel oPtimizer (SLIP). The proposed algorithm first updates the lower-level variable by a few steps of stochastic gradient descent, and then simultaneously updates the upper-level variable by normalized stochastic gradient descent with momentum and the lower-level variable by stochastic gradient descent. Under standard assumptions, we show that our algorithm finds an $\epsilon$-stationary point within $\widetilde{O}(1/\epsilon^4)$ oracle calls of stochastic gradient or Hessian-vector product, both in expectation and with high probability (here $\widetilde{O}(\cdot)$ compresses logarithmic factors of $1/\epsilon$ and $1/\delta$, where $\delta\in(0,1)$ denotes the failure probability). This complexity result is nearly optimal up to logarithmic factors without mean-square smoothness of the stochastic gradient oracle. Our proof relies on (i) a refined characterization and control of the lower-level variable and (ii) establishing a novel connection between bilevel optimization and stochastic optimization under distributional drift. Our experiments on various tasks show that our algorithm significantly outperforms strong baselines in bilevel optimization.
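The two-phase update scheme described in the abstract (warm-start the lower-level variable, then simultaneously update both variables in a single loop) can be sketched on a deterministic toy problem. The quadratic objectives, step sizes, and iteration counts below are illustrative assumptions, and exact gradients stand in for the stochastic oracles:

```python
import numpy as np

# Toy stand-in for the single-loop scheme (illustrative assumptions throughout):
#   lower level: g(x, y) = 0.5 ||y - x||^2  => y*(x) = x,  grad_y g = y - x
#   upper level: f(x, y) = 0.5 ||y||^2      => Phi(x) = 0.5 ||x||^2, grad Phi(x) = x
# In this toy setting the hypergradient estimate at (x, y) is simply y,
# and it is exact when y = y*(x).
rng = np.random.default_rng(1)
d = 10
x = rng.standard_normal(d)
y = rng.standard_normal(d)

alpha, beta, gamma = 0.02, 0.9, 0.5  # upper step, momentum, lower step (illustrative)
mom = np.zeros(d)

# Phase 1: warm-start the lower-level variable with a few gradient steps.
for _ in range(20):
    y -= gamma * (y - x)

# Phase 2: a single loop updating both variables together, with the
# upper-level variable moved by a normalized momentum step.
for _ in range(400):
    hypergrad = y                                        # estimate at current (x, y)
    mom = beta * mom + (1 - beta) * hypergrad
    x = x - alpha * mom / (np.linalg.norm(mom) + 1e-12)  # normalized momentum step
    y -= gamma * (y - x)                                 # one lower-level step

print(np.linalg.norm(x))  # ||grad Phi(x)|| = ||x||, driven down to the O(alpha) scale
```

Normalizing the momentum direction is what keeps the upper-level step size bounded even when the smoothness constant (and hence the raw gradient norm) is unbounded, while the interleaved lower-level step lets y track y*(x) without an inner loop.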
Problem

Research questions and friction points this paper is trying to address.

Bi-level Optimization
Meta Learning
Recurrent Neural Networks
Innovation

Methods, ideas, or system contributions that make the work stand out.

SLIP Method
Stochastic Bilevel Optimization
Gradient Descent
Xiaochuan Gong
George Mason University
Jie Hao
Department of Computer Science, George Mason University, USA
Mingrui Liu
Department of Computer Science, George Mason University, USA