🤖 AI Summary
Memory bottlenecks from storing layer activations severely hinder scalability in neural network training. This paper introduces, for the first time, randomized matrix sketching for activation compression, proposing a tri-sketch mechanism that integrates exponential moving averages (EMA) with adaptive rank adjustment to reconstruct gradients efficiently during backpropagation. Inspired by control theory and trajectory compression in dynamical systems, the method achieves real-time, dynamic trade-offs between memory footprint and gradient fidelity. Experiments on MNIST, CIFAR-10, and physics-informed neural networks (PINNs) demonstrate up to 72% GPU memory reduction, enable low-overhead real-time gradient norm monitoring, and preserve convergence behavior with negligible loss in final accuracy. The core contribution is a sketch-driven, differentiable activation compression framework, establishing a lightweight, controllable, and theoretically interpretable paradigm for memory-efficient large-scale neural network training.
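The gradient-norm monitoring idea can be illustrated with a standard Johnson-Lindenstrauss-style random projection: a norm estimated from a small sketch tracks the true norm without storing the full vector. This is a minimal sketch of the principle, not the paper's tri-sketch mechanism; the dimensions, the projection `Omega`, and the gradient stand-in `g` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 4096, 256  # gradient dimension and sketch size (illustrative choices)

# Scaled Gaussian projection: E[||g @ Omega||^2] = ||g||^2 (JL-style sketch)
Omega = rng.standard_normal((d, k)) / np.sqrt(k)

g = rng.standard_normal(d)              # stand-in for a per-layer gradient
norm_est = np.linalg.norm(g @ Omega)    # computed from the k-dim sketch only
true_norm = np.linalg.norm(g)

rel_err = abs(norm_est - true_norm) / true_norm
```

The sketch costs O(k) memory per vector instead of O(d), which is what makes real-time norm tracking cheap; the estimate concentrates around the true norm with relative error on the order of 1/sqrt(k).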
📝 Abstract
Neural network training relies on gradient computation through backpropagation, yet the memory required to store layer activations presents significant scalability challenges. We present the first adaptation of control-theoretic matrix sketching to neural network layer activations, enabling memory-efficient gradient reconstruction in backpropagation. This work builds on recent matrix sketching frameworks for dynamic optimization problems, where similar state-trajectory storage challenges motivate sketching techniques. Our approach sketches layer activations using three complementary sketch matrices maintained through exponential moving averages (EMA) with adaptive rank adjustment, automatically balancing memory efficiency against approximation quality. Empirical evaluation on MNIST, CIFAR-10, and physics-informed neural networks demonstrates a controllable accuracy-memory tradeoff. We further present a gradient monitoring application on MNIST, showing how sketched activations enable real-time gradient norm tracking with minimal memory overhead. These results establish that sketched activation storage provides a viable path toward memory-efficient neural network training and analysis.
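To make the storage trade-off concrete, here is a hedged NumPy sketch of the general idea: replace the full activation matrix with two small random sketches and reconstruct an approximation from them. This uses a standard two-sided randomized sketch (a Tropp-style reconstruction `Y (Psi Y)^+ W`) as a stand-in; the paper's actual mechanism uses three sketch matrices with EMA updates and adaptive rank, and all names and dimensions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, r = 256, 512, 16   # batch size, feature dim, effective activation rank
k, l = 32, 64            # sketch sizes (k >= r, l >= k), chosen for illustration

# Fixed random test matrices (the paper maintains three EMA-updated sketches)
Omega = rng.standard_normal((d, k))   # right/column sketch
Psi = rng.standard_normal((l, n))     # left/row sketch

def sketch(A):
    """Store two small sketches (n x k and l x d) instead of full n x d activations."""
    return A @ Omega, Psi @ A

def reconstruct(Y, W):
    """Low-rank reconstruction A_hat = Y (Psi Y)^+ W from the sketches alone."""
    return Y @ np.linalg.pinv(Psi @ Y) @ W

# Low-rank stand-in for a layer's activations (activations are often near low-rank)
A = rng.standard_normal((n, r)) @ rng.standard_normal((r, d))
Y, W = sketch(A)

# EMA across steps, mimicking the paper's EMA-maintained sketches
beta = 0.9
Y_ema, W_ema = beta * Y + (1 - beta) * Y, beta * W + (1 - beta) * W

A_hat = reconstruct(Y, W)
rel_err = np.linalg.norm(A - A_hat) / np.linalg.norm(A)
```

With these sizes the sketches take n*k + l*d = 40,960 floats versus n*d = 131,072 for the full activations, roughly a 69% reduction, and reconstruction is near-exact when the activations are rank-r with k >= r; increasing k and l (the adaptive-rank knob) trades memory for fidelity.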