Beyond Logit Adjustment: A Residual Decomposition Framework for Long-Tailed Reranking

📅 2026-04-01
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
This work addresses the tendency of models in long-tailed classification to favor frequent classes during inference, which degrades ranking performance on rare classes. From a Bayes-optimal re-ranking perspective, the authors propose a residual decomposition theory that decouples the correction term into a class-specific offset and an input-dependent pairwise interaction term. They theoretically reveal the limitations of the former and establish verifiable conditions under which the latter is effective. Building on these insights, they design REPAIR, a lightweight post-hoc re-ranker that combines a shrinkage-stabilized class term with a linear pairwise term based on competitive features. Experiments across five benchmarks, including image, species, scene, and rare disease diagnosis datasets, demonstrate the method's efficacy and its ability to accurately identify scenarios where pairwise correction is essential.
📝 Abstract
Long-tailed classification, where a small number of frequent classes dominate many rare ones, remains challenging because models systematically favor frequent classes at inference time. Existing post-hoc methods such as logit adjustment address this by adding a fixed classwise offset to the base-model logits. However, the correction required to restore the relative ranking of two classes need not be constant across inputs, and a fixed offset cannot adapt to such variation. We study this problem through Bayes-optimal reranking on a base-model top-k shortlist. The gap between the optimal score and the base score, the residual correction, decomposes into a classwise component that is constant within each class, and a pairwise component that depends on the input and competing labels. When the residual is purely classwise, a fixed offset suffices to recover the Bayes-optimal ordering. We further show that when the same label pair induces incompatible ordering constraints across contexts, no fixed offset can achieve this recovery. This decomposition leads to testable predictions regarding when pairwise correction can improve performance and when it cannot. We develop REPAIR (Reranking via Pairwise residual correction), a lightweight post-hoc reranker that combines a shrinkage-stabilized classwise term with a linear pairwise term driven by competition features on the shortlist. Experiments on five benchmarks spanning image classification, species recognition, scene recognition, and rare disease diagnosis confirm that the decomposition explains where pairwise correction helps and where classwise correction alone suffices.
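The corrected score described in the abstract, base logit plus a classwise offset plus a linear pairwise term over the top-k shortlist, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the `pair_features` callable, and the exact linear form of the pairwise term are assumptions, since the paper's competition features are not specified here.

```python
import numpy as np

def repair_rerank(logits, class_offsets, pair_weights, pair_features, k=5):
    """Sketch of a REPAIR-style post-hoc rerank (hypothetical API).

    logits:        (C,) base-model logits for one input
    class_offsets: (C,) classwise correction (shrinkage-stabilized in the paper)
    pair_weights:  (d,) weights of the assumed linear pairwise term
    pair_features: callable (i, j) -> (d,) competition features for the
                   label pair (i, j) on the shortlist (hypothetical)
    """
    # 1) Shortlist the top-k classes by base logit.
    shortlist = np.argsort(logits)[::-1][:k]
    # 2) Corrected score = base logit + classwise offset + pairwise term
    #    summed over competing labels on the shortlist.
    scores = {}
    for i in shortlist:
        pairwise = sum(
            pair_weights @ pair_features(i, j) for j in shortlist if j != i
        )
        scores[i] = logits[i] + class_offsets[i] + pairwise
    # 3) Rerank the shortlist by corrected score, best first.
    return sorted(shortlist, key=lambda i: -scores[i])
```

With the pairwise term zeroed out, this reduces to plain logit adjustment: only the fixed classwise offsets can change the ordering, which is exactly the regime the paper argues is insufficient when the same label pair needs different corrections in different contexts.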
Problem

Research questions and friction points this paper is trying to address.

long-tailed classification
logit adjustment
reranking
residual correction
class imbalance
Innovation

Methods, ideas, or system contributions that make the work stand out.

residual decomposition
long-tailed classification
pairwise correction
post-hoc reranking
Bayes-optimal ranking
Zhanliang Wang
University of Pennsylvania, Philadelphia, PA, USA; Children's Hospital of Philadelphia, Philadelphia, PA, USA
Hongzhuo Chen
University of Pennsylvania, Philadelphia, PA, USA; Children's Hospital of Philadelphia, Philadelphia, PA, USA
Quan Minh Nguyen
PhD Student, University of Florida
Computer Vision, Federated Learning, Differential Privacy, Adversarial ML
Mian Umair Ahsan
Children's Hospital of Philadelphia, Philadelphia, PA, USA
Kai Wang
Children's Hospital of Philadelphia
genomics, bioinformatics, biomedical informatics, multimodal AI