Diagonalizing the Softmax: Hadamard Initialization for Tractable Cross-Entropy Dynamics

📅 2025-12-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
The theoretical understanding of cross-entropy (CE) loss optimization in non-convex deep learning remains limited, especially regarding global dynamics. Method: Focusing on the minimal non-convex setting—two-layer linear networks with standard-basis inputs—we rigorously analyze the CE gradient flow. We discover that Hadamard initialization diagonalizes the softmax operator and freezes the singular vectors of the weight matrices; leveraging this, we construct an explicit Lyapunov function. Contribution/Results: We establish the first global convergence guarantee for CE gradient flow to neural collapse—a geometric configuration in which within-class features collapse onto their class means, intra-class variance vanishes, and the class means form a simplex equiangular tight frame (equinorm and maximally separated). This result removes the prior reliance on squared-loss or convexity assumptions, providing the first non-convex, multi-class, globally convergent theory for CE optimization, and further clarifies the role of implicit regularization in realistic training dynamics.

📝 Abstract
Cross-entropy (CE) training loss dominates deep learning practice, yet existing theory often relies on simplifications that miss essential behavior: either replacing CE with squared loss or restricting attention to convex models. CE and squared loss generate fundamentally different dynamics, and convex linear models cannot capture the complexities of non-convex optimization. We provide an in-depth characterization of multi-class CE optimization dynamics beyond the convex regime by analyzing a canonical two-layer linear neural network with standard-basis vectors as inputs: the simplest non-convex extension for which the implicit bias remained unknown. This model coincides with the unconstrained features model used to study neural collapse, making our work the first to prove that gradient flow on CE converges to the neural collapse geometry. We construct an explicit Lyapunov function that establishes global convergence, despite the presence of spurious critical points in the non-convex landscape. A key insight underlying our analysis is a deceptively simple observation: Hadamard initialization diagonalizes the softmax operator, freezing the singular vectors of the weight matrices and reducing the dynamics entirely to their singular values. This technique opens a pathway for analyzing CE training dynamics well beyond the specific setting considered here.
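As a rough illustration of the setting the abstract describes, the toy sketch below trains a two-layer linear network on standard-basis inputs with CE loss, starting from a Hadamard initialization. The 1/√K scaling, learning rate, and step count are illustrative assumptions, not the paper's prescription.

```python
import numpy as np

def softmax_cols(Z):
    """Column-wise softmax (each column of Z is one sample's logits)."""
    Z = Z - Z.max(axis=0, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=0, keepdims=True)

K = 4
# 4x4 Sylvester Hadamard matrix: H @ H.T = K * I
H2 = np.array([[1., 1.], [1., -1.]])
H = np.kron(H2, H2)

X = np.eye(K)          # standard-basis inputs, one per class
Y = np.eye(K)          # one-hot labels
W1 = H / np.sqrt(K)    # Hadamard initialization (illustrative scaling)
W2 = H / np.sqrt(K)

lr = 0.5
losses = []
for _ in range(200):
    Z = W2 @ W1 @ X                     # logits, one column per sample
    P = softmax_cols(Z)
    losses.append(-np.mean(np.log(P[np.arange(K), np.arange(K)])))
    G = (P - Y) / K                     # gradient of the mean CE loss w.r.t. Z
    # simultaneous gradient-descent step on both layers
    W2, W1 = W2 - lr * G @ (W1 @ X).T, W1 - lr * W2.T @ G
```

Because this H is symmetric with H² = K·I, the initial logits are exactly the identity, and descent only needs to grow the logit margin — a scalar reduction consistent with the singular-value picture the abstract describes.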
Problem

Research questions and friction points this paper is trying to address.

Characterizes multi-class cross-entropy optimization dynamics beyond convex models
Proves gradient flow on cross-entropy converges to neural collapse geometry
Introduces Hadamard Initialization to diagonalize softmax and simplify dynamics analysis
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hadamard initialization diagonalizes softmax operator
Lyapunov function proves global convergence despite spurious points
Analysis reduces dynamics to singular values via diagonalization
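The bullets above hinge on a basic property of Hadamard matrices: their rows and columns are mutually orthogonal, so a weight matrix proportional to H has fixed singular vectors and equal singular values. A minimal numerical check (using the Sylvester construction for illustration):

```python
import numpy as np

# Sylvester construction: H_{2n} = [[H_n, H_n], [H_n, -H_n]]
H = np.array([[1.]])
for _ in range(2):                     # two doublings -> 4x4 Hadamard matrix
    H = np.block([[H, H], [H, -H]])

K = H.shape[0]
# Rows (and columns) are mutually orthogonal: H @ H.T = K * I
print(np.allclose(H @ H.T, K * np.eye(K)))   # → True

# Hence H / sqrt(K) is orthogonal, and every singular value of H equals sqrt(K)
s = np.linalg.svd(H, compute_uv=False)
```

So any weight matrix of the form c·H has its singular vectors pinned to (scaled) Hadamard columns; if training preserves this structure, only the scalar c — the singular value — evolves.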
Connall Garrod
Mathematical Institute, University of Oxford
Jonathan P. Keating
Mathematical Institute, University of Oxford
Christos Thrampoulidis
Assistant Professor, University of British Columbia, ECE Department
data science · signal processing · optimization · machine-learning