HAM: A Hyperbolic Step to Regulate Implicit Bias

📅 2025-06-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Over-parameterization in deep learning induces a hyperbolic implicit bias that promotes sparsity but degrades effective learning rates and slows convergence. To address this, we propose HAM—a novel optimizer that alternates standard gradient steps with differentiable hyperbolic mirror steps, jointly achieving rapid convergence and favorable geometric structure. This work is the first to integrate hyperbolic mirror steps into optimization, theoretically characterizing their exact implicit bias in underdetermined linear regression and establishing their equivalence to natural gradient descent. Empirical results demonstrate that HAM significantly outperforms state-of-the-art optimizers across diverse tasks—including visual recognition, graph and node classification, and large language model fine-tuning—while maintaining compatibility with various sparsification techniques. Crucially, HAM incurs negligible computational and memory overhead and remains stable and efficient even under small-batch settings.

📝 Abstract
Understanding the implicit bias of optimization algorithms has become central to explaining the generalization behavior of deep learning models. For instance, the hyperbolic implicit bias induced by the overparameterization $m \odot w$--though effective in promoting sparsity--can result in a small effective learning rate, which slows down convergence. To overcome this obstacle, we propose HAM (Hyperbolic Aware Minimization), which alternates between an optimizer step and a new hyperbolic mirror step. We derive the Riemannian gradient flow for its combination with gradient descent, leading to improved convergence and a similar beneficial hyperbolic geometry as $m \odot w$ for feature learning. We provide an interpretation of the algorithm by relating it to natural gradient descent, and an exact characterization of its implicit bias for underdetermined linear regression. HAM's implicit bias consistently boosts performance--even in dense training, as we demonstrate in experiments across diverse tasks, including vision, graph and node classification, and large language model fine-tuning. HAM is especially effective in combination with different sparsification methods, improving upon the state of the art. The hyperbolic step requires minimal computational and memory overhead, succeeds even with small batch sizes, and integrates smoothly with existing optimizers.
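The alternation the abstract describes can be illustrated with a minimal sketch. The exact form of the paper's hyperbolic mirror step is not given here, so the version below is an assumption: it uses the hypentropy-style mirror map whose gradient is $\operatorname{arcsinh}(w/\beta)$, which is known to be connected to the implicit bias of the $m \odot w$ overparameterization. The function names (`hyperbolic_mirror_step`, `ham_sketch`) and the hyperparameter choices are illustrative, not from the paper.

```python
import numpy as np

def hyperbolic_mirror_step(w, grad, lr, beta=0.1):
    """One mirror-descent step under a hyperbolic (hypentropy-style)
    mirror map -- an assumed stand-in for HAM's hyperbolic step.
    Dual update: arcsinh(w / beta) - lr * grad, mapped back via sinh."""
    return beta * np.sinh(np.arcsinh(w / beta) - lr * grad)

def ham_sketch(X, y, steps=500, lr_gd=0.01, lr_mirror=0.01, beta=0.1):
    """Alternate a plain gradient step with the hyperbolic mirror step,
    mimicking HAM's alternation, on underdetermined linear regression
    (more parameters than observations)."""
    n, d = X.shape
    w = np.zeros(d)
    for t in range(steps):
        grad = X.T @ (X @ w - y) / n  # least-squares gradient
        if t % 2 == 0:
            w = w - lr_gd * grad                        # optimizer step
        else:
            w = hyperbolic_mirror_step(w, grad, lr_mirror, beta)
    return w
```

In this toy setting the mirror step biases the iterates toward sparse interpolating solutions (small $\beta$ makes the geometry more sparsity-promoting), while the interleaved gradient steps keep the effective learning rate from collapsing, which is the trade-off the abstract highlights.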
Problem

Research questions and friction points this paper is trying to address.

Regulate implicit bias in deep learning optimization
Overcome slow convergence from hyperbolic implicit bias
Improve performance in sparse and dense training tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hyperbolic mirror step alternates with optimizer
Riemannian gradient flow improves convergence
Minimal overhead; integrates with existing optimizers