🤖 AI Summary
To address the high activation storage overhead and the computational redundancy of repeated low-rank decompositions in on-device learning, this paper proposes an activation compression method that uses a one-time higher-order SVD to construct reusable, orthogonal low-rank subspaces. The method eliminates frequent decompositions during backpropagation and enables task-incremental expansion via orthogonal subspace allocation, obviating the need to store large task-specific matrices. Integrated into a joint memory–computation optimization framework, it supports lightweight fine-tuning and mitigates catastrophic forgetting. Experiments demonstrate up to a 250× reduction in activation storage on multiple image classification benchmarks while matching the accuracy of full-activation backpropagation. On standard continual learning benchmarks, it achieves performance comparable to orthogonal gradient projection methods with only minimal memory overhead, making it well suited to resource-constrained edge deployment.
📝 Abstract
On-device learning is essential for personalization, privacy, and long-term adaptation in resource-constrained environments. Achieving this requires efficient learning, both fine-tuning existing models and continually acquiring new tasks without catastrophic forgetting. Yet both settings are constrained by the high memory cost of storing activations during backpropagation. Existing activation compression methods reduce this cost but rely on repeated low-rank decompositions, introducing computational overhead; moreover, such methods have not been explored for continual learning. We propose LANCE (Low-rank Activation Compression), a framework that performs a one-shot higher-order Singular Value Decomposition (SVD) to obtain a reusable low-rank subspace for activation projection. This eliminates repeated decompositions, reducing both memory and computation. Furthermore, the fixed low-rank subspaces enable on-device continual learning by allocating tasks to orthogonal subspaces without storing large task-specific matrices. Experiments show that LANCE reduces activation storage by up to 250$\times$ while maintaining accuracy comparable to full backpropagation on CIFAR-10/100, Oxford-IIIT Pets, Flowers102, and CUB-200. On continual learning benchmarks (Split CIFAR-100, Split MiniImageNet, 5-Datasets), it achieves performance competitive with orthogonal gradient projection methods at a fraction of the memory cost. These results position LANCE as a practical and scalable solution for efficient fine-tuning and continual learning on edge devices.
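For intuition, the sketch below illustrates the core idea described in the abstract, not the authors' implementation: a linear layer whose backward pass uses only a low-rank projection of its input activation, with the projection basis computed once from an SVD of sample activations and then reused. For simplicity it uses an ordinary matrix SVD rather than the higher-order SVD the paper describes, and the names `LowRankLinearFn` and `build_basis` are illustrative.

```python
import torch

# Hypothetical sketch (not the paper's code): save only a compressed
# activation for backprop, projected onto a fixed orthonormal basis
# obtained once from an SVD of calibration activations.

class LowRankLinearFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, weight, basis):
        # x: (batch, in_features); basis: (in_features, r), columns orthonormal
        coeffs = x @ basis                  # compressed activation, (batch, r)
        ctx.save_for_backward(coeffs, weight, basis)
        return x @ weight.t()               # weight: (out_features, in_features)

    @staticmethod
    def backward(ctx, grad_out):
        coeffs, weight, basis = ctx.saved_tensors
        x_approx = coeffs @ basis.t()        # approximate reconstruction of x
        grad_x = grad_out @ weight           # gradient w.r.t. the input
        grad_w = grad_out.t() @ x_approx     # weight gradient from compressed activation
        return grad_x, grad_w, None


def build_basis(sample_acts: torch.Tensor, rank: int) -> torch.Tensor:
    """One-time SVD of calibration activations -> reusable orthonormal basis."""
    # sample_acts: (num_samples, in_features)
    _, _, vh = torch.linalg.svd(sample_acts, full_matrices=False)
    return vh[:rank].t()                     # (in_features, rank)
```

Under this reading, memory savings come from storing the (batch, r) coefficients instead of the full (batch, in_features) activation, and the decomposition cost is paid only once when the basis is built rather than at every training step.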