Risk Comparisons in Linear Regression: Implicit Regularization Dominates Explicit Regularization

📅 2025-09-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work provides instance-wise comparisons of the finite-sample risks of gradient descent (GD), ridge regression (RR), and online stochastic gradient descent (SGD) on well-specified linear regression problems. Method: The analysis moves beyond the minimax paradigm, combining finite-sample risk bounds, optimal stopping, and ideas from benign overfitting theory, with explicit attention to the covariance spectrum, effective capacity, and source conditions. Contribution/Results: GD dominates RR: with comparable regularization, the excess risk of GD is within a constant factor of ridge on every instance, while ridge can be polynomially worse even when tuned optimally. GD and SGD are incomparable in general, and their relative performance is governed by the decay of the covariance spectrum: for fast and continuously decaying spectra, which include all problems satisfying the standard capacity condition, GD dominates SGD as well. This yields a systematic, instance-level characterization of when implicit regularization (via GD) outperforms explicit regularization (via RR), together with the boundary conditions governing the comparison with SGD.
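For reference, a minimal LaTeX sketch of the quantities the summary refers to. The excess risk is standard; the particular power-law capacity condition and source condition written here are common conventions assumed for illustration, not necessarily the exact definitions used in the paper.

```latex
% Well-specified model: y = \langle w_*, x \rangle + noise, with covariance \Sigma = \mathbb{E}[x x^\top].
% Excess risk of an estimator w:
R(w) - R(w_*) \;=\; (w - w_*)^\top \Sigma \,(w - w_*).
% One common form of the capacity (spectral decay) and source (target smoothness) conditions:
\lambda_i(\Sigma) \;\asymp\; i^{-\alpha} \quad (\alpha > 1),
\qquad
w_* = \Sigma^{\,r} u \ \ \text{with} \ \|u\|_2 < \infty \quad (r \ge 0).
```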

📝 Abstract
Existing theory suggests that for linear regression problems categorized by capacity and source conditions, gradient descent (GD) is always minimax optimal, while both ridge regression and online stochastic gradient descent (SGD) are polynomially suboptimal for certain categories of such problems. Moving beyond minimax theory, this work provides instance-wise comparisons of the finite-sample risks for these algorithms on any well-specified linear regression problem. Our analysis yields three key findings. First, GD dominates ridge regression: with comparable regularization, the excess risk of GD is always within a constant factor of ridge, but ridge can be polynomially worse even when tuned optimally. Second, GD is incomparable with SGD. While it is known that for certain problems GD can be polynomially better than SGD, the reverse is also true: we construct problems, inspired by benign overfitting theory, where optimally stopped GD is polynomially worse. Finally, GD dominates SGD for a significant subclass of problems -- those with fast and continuously decaying covariance spectra -- which includes all problems satisfying the standard capacity condition.
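To make the instance-wise comparison concrete, the following is a minimal simulation sketch on one synthetic problem. It is purely illustrative: the dimension, power-law spectrum, noise level, step sizes, and tuning grids are assumptions made for this example, and the snippet does not reproduce the paper's constructions or proofs.

```python
# Illustrative only: compare finite-sample excess risks of early-stopped GD,
# ridge regression, and one-pass averaged SGD on a single synthetic instance.
import numpy as np

rng = np.random.default_rng(0)
d, n, noise = 200, 100, 0.5
alpha = 2.0                                      # assumed spectral decay lambda_i ~ i^(-alpha)
lam = np.arange(1, d + 1, dtype=float) ** (-alpha)
w_star = rng.standard_normal(d) * np.sqrt(lam)   # target aligned with the spectrum

X = rng.standard_normal((n, d)) * np.sqrt(lam)   # rows ~ N(0, diag(lam))
y = X @ w_star + noise * rng.standard_normal(n)

def excess_risk(w):
    # population excess risk (w - w_*)^T Sigma (w - w_*) for diagonal Sigma
    return float(np.sum(lam * (w - w_star) ** 2))

# Full-batch GD on the empirical least-squares loss, keeping the best stopping time.
eta = 0.5 / np.linalg.norm(X, 2) ** 2            # conservative constant step size
w, best_gd = np.zeros(d), np.inf
for _ in range(2000):
    w = w - eta * X.T @ (X @ w - y) / n
    best_gd = min(best_gd, excess_risk(w))

# Ridge regression, optimally tuned over a grid of regularization strengths.
best_ridge = min(
    excess_risk(np.linalg.solve(X.T @ X / n + reg * np.eye(d), X.T @ y / n))
    for reg in np.logspace(-6, 1, 30)
)

# One-pass SGD with iterate averaging, tuned over a grid of constant step sizes.
def sgd_risk(step):
    v, avg = np.zeros(d), np.zeros(d)
    for i in range(n):
        v = v - step * (X[i] @ v - y[i]) * X[i]
        avg += v
    return excess_risk(avg / n)

best_sgd = min(sgd_risk(s) for s in np.logspace(-3, -0.5, 12))

print(f"optimally stopped GD : {best_gd:.4f}")
print(f"optimally tuned ridge: {best_ridge:.4f}")
print(f"tuned one-pass SGD   : {best_sgd:.4f}")
```

With a fast-decaying spectrum like the one assumed here, one would expect tuned GD to be competitive with both baselines, in line with the abstract's third finding; spectra in the benign-overfitting style are where the comparison with SGD can flip.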
Problem

Research questions and friction points this paper is trying to address.

How the finite-sample risks of gradient descent, ridge regression, and online SGD compare on individual problem instances
Whether instance-wise analysis can go beyond minimax optimality theory, which ranks algorithms only over whole problem classes
Which dominance relationships hold between implicit regularization (early-stopped GD) and explicit regularization (ridge)
Innovation

Methods, ideas, or system contributions that make the work stand out.

GD dominates ridge regression under comparable regularization (see the sketch after this list)
GD and SGD are incomparable: each can be polynomially better than the other on some problems
GD dominates SGD for problems with fast and continuously decaying covariance spectra, including all problems satisfying the standard capacity condition
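The "comparable regularization" in the first item refers to the standard pairing of an early-stopped GD run with a ridge estimator. A rough version of this correspondence is sketched below; the exact calibration used in the paper may differ. Along an eigendirection of the empirical covariance with eigenvalue λ_i, running GD from zero for t steps with step size η shrinks the least-squares solution by the filter factor 1 - (1 - ηλ_i)^t, which matches ridge's factor up to constants when λ is on the order of 1/(ηt):

```latex
1 - (1 - \eta \lambda_i)^{t}
\;\approx\;
\frac{\lambda_i}{\lambda_i + \lambda}
\qquad \text{when} \qquad
\lambda \;\approx\; \frac{1}{\eta\, t},
```

so t steps of GD are regarded as comparable to ridge with regularization strength roughly 1/(ηt).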