Mirror, Mirror of the Flow: How Does Regularization Shape Implicit Bias?

📅 2025-04-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work investigates how explicit regularization—particularly weight decay—dynamically modulates implicit bias and the geometric structure of optimization trajectories in overparameterized model training. Building on the mirror flow framework, it identifies a threefold effect of weight decay on implicit bias: relocating the bias, changing its type, and shrinking its support region. Guided by this insight, the authors propose a dynamic "weight-decay shutoff" scheduling strategy. The theoretical analysis covers diverse models—including sparse coding, matrix sensing, single-layer attention, and LoRA—combining continuous-time optimization derivations with empirical validation. Across multiple benchmarks, the proposed schedule consistently improves generalization, connecting a rigorous account of regularized training dynamics with a practical scheduling choice.

📝 Abstract
Implicit bias plays an important role in explaining how overparameterized models generalize well. Explicit regularization like weight decay is often employed in addition to prevent overfitting. While both concepts have been studied separately, in practice, they often act in tandem. Understanding their interplay is key to controlling the shape and strength of implicit bias, as it can be modified by explicit regularization. To this end, we incorporate explicit regularization into the mirror flow framework and analyze its lasting effects on the geometry of the training dynamics, covering three distinct effects: positional bias, type of bias, and range shrinking. Our analytical approach encompasses a broad class of problems, including sparse coding, matrix sensing, single-layer attention, and LoRA, for which we demonstrate the utility of our insights. To exploit the lasting effect of regularization and highlight the potential benefit of dynamic weight decay schedules, we propose to switch off weight decay during training, which can improve generalization, as we demonstrate in experiments.
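The mirror-flow correspondence the abstract builds on can be sketched as follows; the notation here is generic (a reparameterization $g$ and potential $R$ are assumptions for illustration, not necessarily the paper's exact setup):

```latex
% Gradient flow with weight decay lambda on the raw parameters w:
\dot{w}(t) = -\nabla_w L\bigl(g(w(t))\bigr) - \lambda\, w(t)

% For suitable reparameterizations \beta = g(w), the unregularized flow
% (\lambda = 0) is known to act as a mirror flow on \beta with some
% convex potential R, which encodes the implicit bias:
\frac{\mathrm{d}}{\mathrm{d}t}\,\nabla R\bigl(\beta(t)\bigr)
  = -\nabla_\beta L\bigl(\beta(t)\bigr)
```

The paper's question is how the extra $-\lambda w$ term reshapes this induced geometry, with lasting effects even after the regularizer is switched off.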
Problem

Research questions and friction points this paper is trying to address.

How does explicit regularization affect the implicit bias of overparameterized models?
How do regularization and implicit bias interact along the training trajectory?
Can dynamic weight-decay schedules improve generalization?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Incorporates explicit regularization into mirror flow framework
Analyzes lasting effects on training dynamics geometry
Proposes dynamic weight decay schedules for better generalization
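The proposed schedule amounts to running weight decay for part of training and then switching it off. A minimal sketch on a toy least-squares problem is below; the function name, shutoff step, and problem setup are illustrative assumptions, not the authors' exact protocol:

```python
# Toy sketch of a "weight-decay shutoff" schedule: apply weight decay
# for the first shutoff_step iterations, then train without it.
import numpy as np

def train(steps=2000, lr=0.05, wd=0.01, shutoff_step=1000, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(20, 5))                     # toy design matrix
    y = X @ np.array([1.0, -2.0, 0.0, 0.0, 3.0])     # toy targets
    w = np.zeros(5)
    for t in range(steps):
        grad = X.T @ (X @ w - y) / len(y)            # gradient of 0.5*MSE
        lam = wd if t < shutoff_step else 0.0        # shutoff: decay -> 0
        w -= lr * (grad + lam * w)                   # decoupled-style decay
    return w

w = train()
```

The early regularized phase biases the trajectory; per the paper, its geometric imprint can persist after shutoff while the final phase removes the shrinkage toward zero.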