🤖 AI Summary
This work addresses a key inefficiency of existing zeroth-order optimization methods in memory-constrained scenarios: their isotropic perturbations ignore the structural information embedded in forward-pass activations, leading to suboptimal fine-tuning of large language models. The paper first shows that the gradient of a linear layer inherently resides in a low-rank subspace spanned by its input activations. Leveraging this insight, the proposed Activation-Guided Zeroth-Order Optimization (AGZO) constructs a subspace-aware perturbation mechanism that confines zeroth-order updates to this low-dimensional subspace, significantly improving the alignment between estimated update directions and true gradients. Theoretical analysis and experiments demonstrate that AGZO substantially outperforms existing zeroth-order methods on large models such as Qwen3 and Pangu, markedly narrowing the performance gap with first-order fine-tuning while maintaining comparable peak memory consumption.
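The core observation above can be checked directly with numpy: for a linear layer `Y = X @ W`, backpropagation gives `dL/dW = X.T @ (dL/dY)`, so every column of the weight gradient lies in the span of the input activation vectors and its rank is bounded by the batch size. This is an illustrative sketch, not the paper's code; all names and dimensions are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_in, d_out = 4, 64, 32           # batch of 4 activations in a 64-dim layer

X = rng.standard_normal((n, d_in))   # input activations
G = rng.standard_normal((n, d_out))  # upstream gradient dL/dY (arbitrary here)

grad_W = X.T @ G                     # dL/dW for the linear layer Y = X @ W

# rank is bounded by the number of activations, not by the layer width
print(np.linalg.matrix_rank(grad_W))  # -> 4 (<= n)

# each column of grad_W lies in span{x_1, ..., x_n} (the rows of X):
# projecting onto that subspace leaves the gradient unchanged
Q, _ = np.linalg.qr(X.T)             # orthonormal basis of the activation subspace
proj = Q @ (Q.T @ grad_W)
print(np.allclose(proj, grad_W))     # -> True
```

The same algebra holds for any loss, since the upstream gradient `G` enters only through the product `X.T @ G`.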
📝 Abstract
Zeroth-Order (ZO) optimization has emerged as a promising solution for fine-tuning LLMs under strict memory constraints, as it avoids the prohibitive memory cost of storing activations for backpropagation. However, existing ZO methods typically employ isotropic perturbations, neglecting the rich structural information available during the forward pass. In this paper, we identify a crucial link between gradient formation and activation structure: the gradient of a linear layer is confined to the subspace spanned by its input activations. Leveraging this insight, we propose Activation-Guided Zeroth-Order optimization (AGZO). Unlike prior methods, AGZO extracts a compact, activation-informed subspace on the fly during the forward pass and restricts perturbations to this low-rank subspace. We provide a theoretical framework showing that AGZO optimizes a subspace-smoothed objective and provably yields update directions with higher cosine similarity to the true gradient than isotropic baselines. Empirically, we evaluate AGZO on Qwen3 and Pangu models across various benchmarks. AGZO consistently outperforms state-of-the-art ZO baselines and significantly narrows the performance gap with first-order fine-tuning, while maintaining almost the same peak memory footprint as other ZO methods.