AI Summary
Zeroth-order (ZO) optimization methods for large language model (LLM) fine-tuning suffer from high gradient estimation variance and suboptimal search directions. To address this, we propose LOREN, a curvature-aware ZO optimization algorithm. Its core innovation lies in modeling gradient preconditioning as an anisotropic perturbation distribution estimation problem: it introduces a low-rank block-diagonal preconditioner to adaptively capture Hessian curvature information, and integrates a REINFORCE leave-one-out (RLOO) estimator to achieve variance-reduced finite-difference gradient estimation. Experiments on standard LLM fine-tuning benchmarks demonstrate that LOREN achieves faster convergence and higher final accuracy than state-of-the-art ZO baselines. Moreover, it reduces peak memory usage by 27.3% relative to MeZO-Adam, significantly improving both the efficiency and stability of zeroth-order optimization for LLM adaptation.
Abstract
We introduce LOREN, a curvature-aware zeroth-order (ZO) optimization method for fine-tuning large language models (LLMs). Existing ZO methods, which estimate gradients via finite differences using random perturbations, often suffer from high variance and suboptimal search directions. Our approach addresses these challenges by: (i) reformulating gradient preconditioning as the problem of adaptively estimating an anisotropic perturbation distribution for gradient estimation, (ii) capturing curvature through a low-rank block-diagonal preconditioner using the framework of natural evolution strategies, and (iii) applying a REINFORCE leave-one-out (RLOO) gradient estimator to reduce variance. Experiments on standard LLM benchmarks show that our method outperforms state-of-the-art ZO methods, achieving higher accuracy and faster convergence while cutting peak memory usage by up to 27.3% compared with MeZO-Adam.
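To make the two key ingredients concrete, the sketch below illustrates (not the paper's actual implementation) how a ZO gradient can be estimated from anisotropic random perturbations combined with a REINFORCE leave-one-out (RLOO) baseline for variance reduction. The toy quadratic `loss`, the per-coordinate `scale` vector (standing in for LOREN's learned preconditioner), and all function names are illustrative assumptions; LOREN's low-rank block-diagonal preconditioner and LLM objective are far richer than this minimal example.

```python
import numpy as np

def loss(theta):
    # Toy quadratic objective standing in for the LLM fine-tuning loss.
    # Its true gradient is simply theta, which makes the sketch checkable.
    return 0.5 * np.sum(theta ** 2)

def zo_grad_rloo(theta, scale, eps=1e-3, k=64, rng=None):
    """Variance-reduced ZO gradient estimate (illustrative sketch).

    Draws k anisotropic perturbations z_i ~ N(0, diag(scale^2)) -- the
    diagonal `scale` plays the role of a (much simpler) preconditioner --
    and combines one-point finite-difference evaluations with an RLOO
    baseline: each sample is centered by the mean loss of the other k-1.
    """
    rng = np.random.default_rng(rng)
    z = rng.standard_normal((k, theta.size)) * scale  # anisotropic directions
    f = np.array([loss(theta + eps * zi) for zi in z])
    baseline = (f.sum() - f) / (k - 1)                # leave-one-out means
    # REINFORCE-style combination: centered loss weights each direction z_i.
    return ((f - baseline)[:, None] * z).mean(axis=0) / eps

theta = np.array([1.0, -2.0, 0.5])
scale = np.ones(3)  # isotropic here; a curvature-aware method adapts this
g = zo_grad_rloo(theta, scale, rng=0)
```

Because the baseline is computed from the other k-1 samples, subtracting it leaves the estimator unbiased while removing the shared loss offset, which is the main source of variance in one-point finite-difference estimates.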