Conflicting Biases at the Edge of Stability: Norm versus Sharpness Regularization

📅 2025-05-27
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work investigates the implicit regularization mechanism underlying gradient descent’s generalization in overparameterized neural networks, focusing on the dynamic trade-off between parameter norm minimization and loss landscape sharpness (i.e., Hessian-based curvature) minimization. Method: Moving beyond isolated analyses of individual implicit biases, we theoretically and empirically examine how the learning rate—operating within the edge-of-stability regime—actively reconciles this tension. We analyze diagonal linear networks, derive analytical results for regression tasks, and quantify sharpness via the Hessian’s eigenvalues. Contribution/Results: We prove that optimizing *either* norm or sharpness alone yields suboptimal generalization; instead, moderate learning rates achieve an effective equilibrium between them. Extensive experiments validate this mechanism, and we propose a novel theoretical framework modeling their dynamic interplay. Our findings provide a unified explanation for the generalization benefits of large-step-size training in overparameterized settings.
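The stability threshold underlying the edge-of-stability regime can be seen on a toy quadratic. The sketch below (a generic illustration, not code from the paper) runs gradient descent on $f(x) = \frac{\lambda}{2}x^2$, whose sharpness (second derivative) is $\lambda$: for step sizes between $1/\lambda$ and $2/\lambda$ the iterates oscillate in sign while still converging, and beyond $2/\lambda$ they diverge.

```python
def gd_trajectory(lam, eta, x0=1.0, steps=50):
    """Gradient descent on f(x) = 0.5 * lam * x**2, whose gradient is lam * x."""
    x = x0
    xs = [x]
    for _ in range(steps):
        x -= eta * lam * x
        xs.append(x)
    return xs

lam = 10.0                            # sharpness of the quadratic: f''(x) = lam
osc = gd_trajectory(lam, eta=0.15)    # 1/lam < eta < 2/lam: oscillates, converges
div = gd_trajectory(lam, eta=0.21)    # eta > 2/lam: iterates diverge

print(abs(osc[-1]))   # essentially zero despite the sign flips
print(abs(div[-1]))   # grows without bound
```

The multiplicative update factor is $1 - \eta\lambda$, so the classical stability condition is $\eta < 2/\lambda$; large-step-size training pushes the sharpness of the reached minima toward this $2/\eta$ boundary.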

📝 Abstract
A widely believed explanation for the remarkable generalization capacities of overparameterized neural networks is that the optimization algorithms used for training induce an implicit bias towards benign solutions. To grasp this theoretically, recent works examine gradient descent and its variants in simplified training settings, often assuming vanishing learning rates. These studies reveal various forms of implicit regularization, such as $\ell_1$-norm minimizing parameters in regression and max-margin solutions in classification. Concurrently, empirical findings show that moderate to large learning rates exceeding standard stability thresholds lead to faster, albeit oscillatory, convergence in the so-called Edge-of-Stability regime, and induce an implicit bias towards minima of low sharpness (norm of training loss Hessian). In this work, we argue that a comprehensive understanding of the generalization performance of gradient descent requires analyzing the interaction between these various forms of implicit regularization. We empirically demonstrate that the learning rate balances between low parameter norm and low sharpness of the trained model. We furthermore prove for diagonal linear networks trained on a simple regression task that neither implicit bias alone minimizes the generalization error. These findings demonstrate that focusing on a single implicit bias is insufficient to explain good generalization, and they motivate a broader view of implicit regularization that captures the dynamic trade-off between norm and sharpness induced by non-negligible learning rates.
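The $\ell_1$-norm bias of diagonal linear networks mentioned in the abstract can be reproduced in a few lines. The following is a hypothetical minimal sketch (not the authors' code): a network $w = u \odot v$ is trained by gradient descent on an underdetermined regression problem with small balanced initialization, and the trained weights land much closer to the minimum-$\ell_1$ interpolator than to the minimum-$\ell_2$ (pseudoinverse) solution.

```python
import numpy as np

# One equation, two unknowns: w1 + 2*w2 = 1. The min-l2 interpolator is
# (0.2, 0.4) with l1 norm 0.6; the min-l1 interpolator is (0, 0.5).
X = np.array([[1.0, 2.0]])
y = np.array([1.0])

alpha, lr, steps = 0.01, 0.1, 2000
u = np.full(2, alpha)   # balanced small initialization: w = u * v = alpha^2
v = np.full(2, alpha)

for _ in range(steps):
    w = u * v
    grad_w = X.T @ (X @ w - y)      # gradient of 0.5 * ||X w - y||^2 w.r.t. w
    u, v = u - lr * grad_w * v, v - lr * grad_w * u  # chain rule through w = u*v

w_net = u * v
w_l2 = np.linalg.pinv(X) @ y        # minimum-l2 interpolator

print(np.abs(X @ w_net - y).item()) # ~0: the network interpolates the data
print(np.abs(w_net).sum())          # close to 0.5, the minimal l1 norm
print(np.abs(w_l2).sum())           # 0.6
```

Shrinking `alpha` moves the solution closer to the exact minimum-$\ell_1$ interpolator; the paper's point is that this norm bias alone, without accounting for the sharpness bias of non-negligible learning rates, does not determine generalization.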
Problem

Research questions and friction points this paper is trying to address.

Analyzes the interaction between norm and sharpness regularization in neural networks
Examines the learning rate's role in balancing parameter norm and sharpness
Demonstrates that a single implicit bias is insufficient to explain good generalization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Balancing norm and sharpness via learning rate
Analyzing implicit regularization interactions
Dynamic trade-off between norm and sharpness
Vit Fojtik
Department of Mathematics, LMU Munich; Munich Center for Machine Learning (MCML)
Maria Matveev
Department of Mathematics, LMU Munich; Munich Center for Machine Learning (MCML); Konrad Zuse School of Excellence in Reliable AI
Hung-Hsu Chou
University of Pittsburgh
Machine Learning, Optimization, Compressed Sensing, Implicit Regularization
Gitta Kutyniok
Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence, LMU Munich
Applied Harmonic Analysis, Artificial Intelligence, Data Science, Imaging Science, Inverse Problems
Johannes Maly
Ludwig-Maximilians-Universität München