Flat Minima and Generalization: Insights from Stochastic Convex Optimization

📅 2025-11-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper investigates the relationship between flatness of minima and generalization performance under the classical setting of stochastic convex optimization with non-negative β-smooth objectives. Method: We conduct a rigorous theoretical analysis of generalization error for flat versus sharp empirical minimizers, and derive upper bounds on the generalization error for two sharpness-aware optimization algorithms—Sharpness-Aware Gradient Descent (SA-GD) and Sharpness-Aware Minimization (SAM). Contribution/Results: We prove that flat empirical minimizers do not guarantee good generalization—their population risk can be as high as Ω(1)—while sharp minimizers may achieve optimal generalization. Moreover, we provide the first generalization error upper bounds for SA-GD and SAM, showing neither algorithm provably improves generalization: SA-GD converges to flat solutions yet incurs Ω(1) population risk; SAM may converge to sharp solutions with similarly degraded generalization. These results challenge the widespread heuristic “flatness implies good generalization,” offering critical counterexamples and precise boundary characterizations for the theoretical foundations of sharpness-aware optimization.

📝 Abstract
Understanding the generalization behavior of learning algorithms is a central goal of learning theory. A recently emerging explanation is that learning algorithms are successful in practice because they converge to flat minima, which have been consistently associated with improved generalization performance. In this work, we study the link between flat minima and generalization in the canonical setting of stochastic convex optimization with a non-negative, $\beta$-smooth objective. Our first finding is that, even in this fundamental and well-studied setting, flat empirical minima may incur trivial $\Omega(1)$ population risk while sharp minima generalize optimally. Then, we show that this poor generalization behavior extends to two natural "sharpness-aware" algorithms originally proposed by Foret et al. (2021), designed to bias optimization toward flat solutions: Sharpness-Aware Gradient Descent (SA-GD) and Sharpness-Aware Minimization (SAM). For SA-GD, which performs gradient steps on the maximal loss in a predefined neighborhood, we prove that while it successfully converges to a flat minimum at a fast rate, the population risk of the solution can still be as large as $\Omega(1)$, indicating that even flat minima found algorithmically using a sharpness-aware gradient method might generalize poorly. For SAM, a computationally efficient approximation of SA-GD based on normalized ascent steps, we show that although it minimizes the empirical loss, it may converge to a sharp minimum and also incur population risk $\Omega(1)$. Finally, we establish population risk upper bounds for both SA-GD and SAM using algorithmic stability techniques.
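The abstract describes SAM as a normalized-ascent approximation of SA-GD: take an ascent step of fixed radius in the gradient direction, then apply the gradient computed at that perturbed point. A minimal sketch of one such update, using an illustrative quadratic toy loss (the function names, step sizes, and objective below are assumptions, not the paper's construction):

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One SAM update (Foret et al., 2021, as summarized in the abstract).

    Take a normalized ascent step of radius rho toward the (approximate)
    worst-case point in the neighborhood, then descend using the gradient
    evaluated at that perturbed point.
    """
    g = grad_fn(w)
    norm = np.linalg.norm(g)
    if norm == 0.0:
        return w  # already at a stationary point; no perturbation defined
    eps = rho * g / norm            # normalized ascent direction
    g_perturbed = grad_fn(w + eps)  # gradient at the adversarial point
    return w - lr * g_perturbed

# Toy smooth convex objective: L(w) = 0.5 * ||w||^2, so grad L(w) = w.
grad = lambda w: w
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, grad)
```

On this toy objective the iterates contract toward the minimizer at the origin, settling in a small neighborhood of it whose radius scales with `rho` and `lr`; SA-GD differs in that it takes gradient steps on the exact maximal loss over the neighborhood rather than this one-step linearized approximation.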
Problem

Research questions and friction points this paper is trying to address.

Investigating flat minima's generalization link in convex optimization
Analyzing poor generalization of sharpness-aware gradient descent methods
Establishing risk bounds for flat-minima-seeking algorithms
Innovation

Methods, ideas, or system contributions that make the work stand out.

Analyzed flat minima generalization in convex optimization
Proved sharpness-aware algorithms may generalize poorly
Established population risk bounds via stability techniques
Matan Schliserman
Blavatnik School of Computer Science and AI, Tel Aviv University
Shira Vansover-Hager
Blavatnik School of Computer Science and AI, Tel Aviv University
Tomer Koren
Associate Professor at Tel Aviv University
Machine Learning · Optimization · Reinforcement Learning