Balanced Rate-Distortion Optimization in Learned Image Compression

πŸ“… 2025-02-27
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
In learned image compression (LIC), rate-distortion (R-D) optimization suffers from gradient imbalance during end-to-end training, leading to objective dominance and suboptimal convergence. This work reformulates R-D optimization as a multi-objective optimization (MOO) problem, the first such formulation in LIC, and proposes a dual-path adaptive gradient balancing framework: (i) a coarse-to-fine gradient descent strategy for training from scratch, and (ii) an equality-constrained quadratic programming (QP) solver for fine-tuning. The method dynamically reweights gradients, overcoming the inherent bias of conventional scalarized weighted-sum approaches. Evaluated on multiple standard benchmarks, it achieves approximately a 2% BD-Rate reduction over strong baselines, with more balanced and efficient optimization and manageable training overhead. The code will be made publicly available.

πŸ“ Abstract
Learned image compression (LIC) using deep learning architectures has seen significant advancements, yet standard rate-distortion (R-D) optimization often encounters imbalanced updates due to diverse gradients of the rate and distortion objectives. This imbalance can lead to suboptimal optimization, where one objective dominates, thereby reducing overall compression efficiency. To address this challenge, we reformulate R-D optimization as a multi-objective optimization (MOO) problem and introduce two balanced R-D optimization strategies that adaptively adjust gradient updates to achieve more equitable improvements in both rate and distortion. The first proposed strategy utilizes a coarse-to-fine gradient descent approach along standard R-D optimization trajectories, making it particularly suitable for training LIC models from scratch. The second proposed strategy analytically addresses the reformulated optimization as a quadratic programming problem with an equality constraint, which is ideal for fine-tuning existing models. Experimental results demonstrate that both proposed methods enhance the R-D performance of LIC models, achieving around a 2% BD-Rate reduction with acceptable additional training cost, leading to a more balanced and efficient optimization process. The code will be made publicly available.
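As a rough illustration of the balanced-gradient idea in the abstract (not the authors' exact algorithm), the classic two-objective min-norm rule from MOO picks the convex combination of the rate gradient and distortion gradient with the smallest norm, so that neither objective's update direction dominates. For two gradients the optimal weight has a closed form; the function name and plain-list representation below are illustrative choices:

```python
def min_norm_weight(g_rate, g_dist):
    """Return alpha in [0, 1] minimizing ||alpha*g_rate + (1-alpha)*g_dist||^2.

    This is the standard two-task min-norm solution used in
    multi-objective gradient descent; it is a sketch of the general
    idea of adaptive gradient reweighting, not the paper's method.
    """
    # diff = g_dist - g_rate; the optimum of the quadratic in alpha is
    # alpha* = (g_dist - g_rate) . g_dist / ||g_dist - g_rate||^2
    diff = [d - r for r, d in zip(g_rate, g_dist)]
    denom = sum(x * x for x in diff)
    if denom == 0.0:
        return 0.5  # identical gradients: any convex weight is optimal
    alpha = sum(x * d for x, d in zip(diff, g_dist)) / denom
    return min(1.0, max(0.0, alpha))  # clip to the simplex [0, 1]
```

With orthogonal gradients of equal norm this yields an even 0.5/0.5 split; when one gradient is a shorter multiple of the other, all weight goes to the shorter one, which is the min-norm behavior that prevents the larger-magnitude objective from dominating the update.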
Problem

Research questions and friction points this paper is trying to address.

Address imbalanced updates in rate-distortion optimization
Enhance compression efficiency via balanced gradient adjustments
Improve R-D performance in learned image compression
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-objective optimization reformulation
Coarse-to-fine gradient descent
Quadratic programming with equality constraint
πŸ”Ž Similar Papers
No similar papers found.
Yichi Zhang
Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana, U.S.A.
Zhihao Duan
PhD candidate, Purdue University
machine learning, computer vision, data compression
Yuning Huang
PhD Candidate, Purdue University
Computer Vision, Deep Learning
Fengqing Zhu
Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana, U.S.A.