Low-Rank Curvature for Zeroth-Order Optimization in LLM Fine-Tuning

πŸ“… 2025-11-11
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Zeroth-order (ZO) optimization methods for fine-tuning large language models (LLMs) suffer from high gradient-estimation variance and suboptimal search directions. To address this, we propose LOREN, a curvature-aware ZO optimization algorithm. Its core idea is to recast gradient preconditioning as the problem of estimating an anisotropic perturbation distribution: it introduces a low-rank block-diagonal preconditioner that adaptively captures Hessian curvature, and integrates a REINFORCE leave-one-out (RLOO) estimator for variance-reduced finite-difference gradient estimation. Experiments on standard LLM fine-tuning benchmarks show that LOREN converges faster and reaches higher final accuracy than state-of-the-art ZO baselines, while reducing peak memory usage by 27.3% relative to MeZO-Adam, improving both the efficiency and stability of zeroth-order optimization for LLM adaptation.
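To make the finite-difference estimation concrete, here is a minimal sketch of the MeZO-style two-point ZO gradient estimate that methods in this family build on. This is an illustrative toy (the function name, toy loss, and hyperparameters are assumptions, not the paper's implementation); LOREN replaces the isotropic Gaussian perturbation below with an adaptively estimated anisotropic one.

```python
import numpy as np

def zo_gradient_estimate(loss_fn, theta, eps=1e-3, seed=0):
    """Two-point finite-difference (SPSA-style) ZO gradient estimate.

    Perturbs all parameters along one shared Gaussian direction z and
    uses (L(theta + eps*z) - L(theta - eps*z)) / (2*eps) as the scalar
    projected gradient, then scales z by it. Only two forward passes
    are needed; no backpropagation.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(theta.shape)          # isotropic perturbation
    loss_plus = loss_fn(theta + eps * z)
    loss_minus = loss_fn(theta - eps * z)
    scale = (loss_plus - loss_minus) / (2 * eps)  # projected gradient
    return scale * z                              # gradient estimate

# toy quadratic loss L(theta) = 0.5 * ||theta||^2 (true gradient: theta)
theta = np.array([1.0, -2.0, 0.5])
g = zo_gradient_estimate(lambda t: 0.5 * np.dot(t, t), theta)
```

The estimate is a rank-one projection of the true gradient onto the random direction, which is exactly why its variance (and hence the choice of perturbation distribution) matters so much at LLM scale.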

πŸ“ Abstract
We introduce LOREN, a curvature-aware zeroth-order (ZO) optimization method for fine-tuning large language models (LLMs). Existing ZO methods, which estimate gradients via finite differences using random perturbations, often suffer from high variance and suboptimal search directions. Our approach addresses these challenges by: (i) reformulating the problem of gradient preconditioning as that of adaptively estimating an anisotropic perturbation distribution for gradient estimation, (ii) capturing curvature through a low-rank block diagonal preconditioner using the framework of natural evolution strategies, and (iii) applying a REINFORCE leave-one-out (RLOO) gradient estimator to reduce variance. Experiments on standard LLM benchmarks show that our method outperforms state-of-the-art ZO methods by achieving higher accuracy and faster convergence, while cutting peak memory usage by up to 27.3% compared with MeZO-Adam.
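Point (ii) amounts to sampling perturbations from an anisotropic Gaussian whose covariance is a diagonal plus a low-rank term, which can be done without ever materializing the full covariance matrix. The sketch below illustrates that sampling trick under stated assumptions (the function and shapes are illustrative, not the paper's block-diagonal structure):

```python
import numpy as np

def sample_anisotropic(d, rank, diag, U, rng):
    """Draw z ~ N(0, diag(diag) + U @ U.T) without forming the d x d covariance.

    z = sqrt(diag) * g0 + U @ g1, with g0 ~ N(0, I_d) and g1 ~ N(0, I_r),
    so cov(z) = diag(diag) + U U^T. Cost is O(d * r) per sample instead
    of O(d^2), which is what makes a low-rank preconditioner viable at
    LLM scale.
    """
    g0 = rng.standard_normal(d)      # isotropic (diagonal) component
    g1 = rng.standard_normal(rank)   # low-rank curvature component
    return np.sqrt(diag) * g0 + U @ g1

# usage: inflate variance along one learned direction (rank 1, d = 4)
rng = np.random.default_rng(0)
U = np.array([[2.0], [0.0], [0.0], [0.0]])
z = sample_anisotropic(4, 1, np.ones(4), U, rng)
```

Directions with larger covariance get probed more often, so aligning the covariance with estimated curvature biases the finite-difference search toward better descent directions.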
Problem

Research questions and friction points this paper is trying to address.

High variance in zeroth-order gradient estimates during LLM fine-tuning
Suboptimal search directions from curvature-agnostic isotropic perturbations
Inefficient optimization without a scalable (low-rank) curvature preconditioner
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptively estimates anisotropic perturbation distribution for gradients
Uses low-rank block diagonal preconditioner for curvature
Applies REINFORCE leave-one-out estimator to reduce variance
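The RLOO idea in the last bullet can be sketched as follows: draw several perturbations, and subtract from each sample's loss the mean loss of the *other* samples before weighting its direction. This is an illustrative sketch under assumed names and hyperparameters, not the paper's exact estimator:

```python
import numpy as np

def rloo_zo_gradient(loss_fn, theta, k=4, eps=1e-3, seed=0):
    """REINFORCE leave-one-out (RLOO) zeroth-order gradient sketch.

    Draws k perturbations z_i, evaluates L(theta + eps * z_i), and uses
    the mean loss of the other k-1 samples as a per-sample baseline.
    The baseline is independent of z_i, so the estimator stays unbiased
    while its variance drops; a constant loss yields exactly zero.
    """
    rng = np.random.default_rng(seed)
    zs = rng.standard_normal((k,) + theta.shape)
    losses = np.array([loss_fn(theta + eps * z) for z in zs])
    total = losses.sum()
    grad = np.zeros_like(theta)
    for i in range(k):
        baseline = (total - losses[i]) / (k - 1)   # leave-one-out mean
        grad += (losses[i] - baseline) * zs[i]     # centered REINFORCE term
    return grad / (k * eps)
```

Because each baseline excludes its own sample, no bias is introduced, which is the standard argument for leave-one-out baselines in REINFORCE-style estimators.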