Randomized Matrix Sketching for Neural Network Training and Gradient Monitoring

📅 2025-09-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Memory bottlenecks from storing layer activations severely hinder scalability in neural network training. This paper introduces randomized matrix sketching for activation compression for the first time, proposing a tri-sketch mechanism that integrates exponential moving averages (EMA) and adaptive rank adjustment to reconstruct gradients efficiently during backpropagation. Inspired by control theory and trajectory compression in dynamical systems, the method trades memory footprint against gradient fidelity dynamically at runtime. Experiments on MNIST, CIFAR-10, and physics-informed neural networks (PINNs) show up to 72% GPU memory reduction, low-overhead real-time gradient norm monitoring, and preserved convergence behavior and final accuracy with negligible degradation. The core contribution is a sketch-driven, differentiable activation compression framework, establishing a lightweight, controllable, and theoretically interpretable paradigm for memory-efficient large-scale neural network training.

📝 Abstract
Neural network training relies on gradient computation through backpropagation, yet memory requirements for storing layer activations present significant scalability challenges. We present the first adaptation of control-theoretic matrix sketching to neural network layer activations, enabling memory-efficient gradient reconstruction in backpropagation. This work builds on recent matrix sketching frameworks for dynamic optimization problems, where similar state trajectory storage challenges motivate sketching techniques. Our approach sketches layer activations using three complementary sketch matrices maintained through exponential moving averages (EMA) with adaptive rank adjustment, automatically balancing memory efficiency against approximation quality. Empirical evaluation on MNIST, CIFAR-10, and physics-informed neural networks demonstrates a controllable accuracy-memory tradeoff. We demonstrate a gradient monitoring application on MNIST showing how sketched activations enable real-time gradient norm tracking with minimal memory overhead. These results establish that sketched activation storage provides a viable path toward memory-efficient neural network training and analysis.
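The abstract's core idea, replacing a stored activation matrix with small complementary sketches from which a low-rank approximation is rebuilt at backward time, can be illustrated with a generic one-pass, two-sided randomized sketch (in the style of generalized Nyström reconstruction). This is a hedged sketch under assumptions, not the paper's tri-sketch method: the function names, the choice of Gaussian test matrices, and the smoothing factor `beta` are all hypothetical.

```python
import numpy as np

def make_sketch(A, r, rng):
    """Sketch an n x d activation matrix A down to rank r.

    Stores only Y = A @ Omega (n x r) and W = Psi @ A (r x d) in place of A,
    so memory drops from n*d to r*(n + d). Omega and Psi could be regenerated
    from a stored seed instead of being kept.
    """
    n, d = A.shape
    Omega = rng.standard_normal((d, r)) / np.sqrt(r)  # right test matrix
    Psi = rng.standard_normal((r, n)) / np.sqrt(r)    # left test matrix
    return Omega, Psi, A @ Omega, Psi @ A

def reconstruct(Omega, Psi, Y, W):
    """Low-rank reconstruction A_hat = Y (Psi Y)^+ W from the two sketches."""
    return Y @ np.linalg.pinv(Psi @ Y) @ W

def ema_update(S_prev, S_new, beta=0.9):
    """Smooth a sketch across training steps, loosely mirroring the paper's
    EMA-maintained sketches (beta is a hypothetical smoothing factor)."""
    return beta * S_prev + (1.0 - beta) * S_new
```

For an activation matrix whose effective rank is below `r`, this reconstruction is exact up to floating-point error; for higher-rank activations it degrades gracefully, which is the accuracy-memory trade-off the abstract describes.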
Problem

Research questions and friction points this paper is trying to address.

Reducing memory usage for storing neural network activations during training
Enabling memory-efficient gradient reconstruction through matrix sketching techniques
Providing real-time gradient monitoring with minimal memory overhead
Innovation

Methods, ideas, or system contributions that make the work stand out.

Matrix sketching for neural network activations
EMA-based adaptive rank adjustment for memory efficiency
Real-time gradient monitoring with minimal memory overhead
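One way a compressed representation can support the gradient-norm tracking listed above is a Johnson-Lindenstrauss-style estimate: the norm of a short random projection of the gradient approximates the full norm. The sketch below is a hypothetical illustration of that principle, not the paper's monitoring procedure; the sketch width `r` and scaling are assumptions.

```python
import numpy as np

def sketched_norm(g, Omega):
    """Estimate ||g||_2 from the r-dimensional projection Omega.T @ g.

    With Omega entries drawn i.i.d. N(0, 1/r), E[||Omega.T g||^2] = ||g||^2,
    so the monitor touches only r numbers per step instead of d.
    """
    return np.linalg.norm(Omega.T @ g)

rng = np.random.default_rng(1)
d, r = 10_000, 256                                   # hypothetical sizes
Omega = rng.standard_normal((d, r)) / np.sqrt(r)     # fixed sketch matrix
g = rng.standard_normal(d)                           # stand-in gradient
est = sketched_norm(g, Omega)
```

The relative error shrinks like 1/sqrt(r), so the monitor's memory overhead (one d x r matrix, or just its seed) can be tuned against tracking accuracy.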