Enhanced Derivative-Free Optimization Using Adaptive Correlation-Induced Finite Difference Estimators

📅 2025-02-28
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Derivative-free optimization (DFO) suffers from high bias in gradient estimation and low sample efficiency due to noisy, uncorrelated finite-difference approximations. Method: This paper proposes a correlation-induced batched finite-difference estimator—the first to explicitly model inter-sample correlation in gradient estimation—and introduces a dynamic adaptive batch-size mechanism alongside a prior-free stochastic line search step-size strategy. Contribution/Results: We establish theoretical consistency of the algorithm and prove its convergence rate matches that of classical Kiefer–Wolfowitz (KW) and Simultaneous Perturbation Stochastic Approximation (SPSA) methods. Empirical evaluation on diverse non-convex benchmark functions demonstrates substantial improvements over state-of-the-art DFO approaches: gradient estimation accuracy is significantly enhanced, and sample efficiency increases by 30%–65%.

Technology Category

Application Category

📝 Abstract
Gradient-based methods are well-suited for derivative-free optimization (DFO), where finite-difference (FD) estimates are commonly used as gradient surrogates. Traditional stochastic approximation methods, such as Kiefer-Wolfowitz (KW) and simultaneous perturbation stochastic approximation (SPSA), typically utilize only two samples per iteration, resulting in imprecise gradient estimates and necessitating diminishing step sizes for convergence. In this paper, we first explore an efficient FD estimate, referred to as correlation-induced FD estimate, which is a batch-based estimate. Then, we propose an adaptive sampling strategy that dynamically determines the batch size at each iteration. By combining these two components, we develop an algorithm designed to enhance DFO in terms of both gradient estimation efficiency and sample efficiency. Furthermore, we establish the consistency of our proposed algorithm and demonstrate that, despite using a batch of samples per iteration, it achieves the same convergence rate as the KW and SPSA methods. Additionally, we propose a novel stochastic line search technique to adaptively tune the step size in practice. Finally, comprehensive numerical experiments confirm the superior empirical performance of the proposed algorithm.
Problem

Research questions and friction points this paper is trying to address.

Improves gradient estimation in derivative-free optimization.
Proposes adaptive sampling for dynamic batch size determination.
Enhances convergence rate and sample efficiency in DFO.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Correlation-induced FD for efficient gradient estimation
Adaptive sampling strategy for dynamic batch sizing
Stochastic line search for adaptive step size tuning
🔎 Similar Papers
No similar papers found.
G
Guo Liang
Institute of Statistics and Big Data, Renmin University of China, Beijing, China
Guangwu Liu
Guangwu Liu
Professor of Management Science, City University of Hong Kong
Stochastic SimulationFinancial EngineeringRisk Management
K
Kun Zhang
Institute of Statistics and Big Data, Renmin University of China, Beijing, China