Improving Deep Regression with Tightness

📅 2025-02-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
In deep regression, the ordinal structure of the targets is often lost in feature space, leading to high conditional entropy $H(Z|Y)$ and degraded generalization. Method: the paper establishes, for the first time, a theoretical connection between ordinality preservation and conditional entropy minimization; proposes an order-consistency regularization term based on optimal transport; and introduces a target-repetition encoding strategy that, from an information-theoretic perspective, tightens the feature representations used for regression. Results: the method yields significant performance gains across three real-world regression tasks, and experiments confirm that it reduces $H(Z|Y)$, preserves the similarity structure of the targets, and improves generalization. Key contributions: (1) a theoretical characterization of how ordinal constraints optimize the information bottleneck; (2) a differentiable, unsupervised regularization framework based on optimal transport; and (3) a new paradigm for feature disentanglement and structure preservation tailored to regression tasks.

📝 Abstract
For deep regression, preserving the ordinality of the targets with respect to the feature representation improves performance across various tasks. However, a theoretical explanation for the benefits of ordinality is still lacking. This work reveals that preserving ordinality reduces the conditional entropy $H(Z|Y)$ of the representation $Z$ given the target $Y$. We also find, however, that typical regression losses do little to reduce $H(Z|Y)$, even though reducing it is vital for generalization. Motivated by this, we introduce an optimal transport-based regularizer that preserves the similarity relationships of the targets in feature space, thereby reducing $H(Z|Y)$. In addition, we introduce a simple yet efficient strategy of duplicating the regressor targets, also aimed at reducing $H(Z|Y)$. Experiments on three real-world regression tasks verify the effectiveness of our strategies for improving deep regression. Code: https://github.com/needylove/Regression_tightness.
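To illustrate the transport-based idea, here is a simplified, hypothetical sketch (not the authors' implementation; their exact formulation is in the linked repository). An entropic optimal-transport plan matches each sample's pairwise-distance profile in feature space against the profiles induced by the targets, and the resulting transport cost penalizes features whose distance structure diverges from that of the targets:

```python
import numpy as np

def sinkhorn_plan(cost, eps=0.1, n_iter=200):
    # Entropic-regularized OT between two uniform empirical measures
    # (standard Sinkhorn iterations on the Gibbs kernel).
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)  # uniform marginals
    K = np.exp(-cost / eps)                          # Gibbs kernel
    v = np.ones(m)
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]               # transport plan

def ot_order_regularizer(z, y, eps=0.1):
    # Hypothetical order-consistency penalty: 1-D features for the sketch
    # (real features would use vector norms for the pairwise distances).
    dz = np.abs(z[:, None] - z[None, :])   # feature distance profiles
    dy = np.abs(y[:, None] - y[None, :])   # target distance profiles
    # cost[i, j]: mismatch between feature i's profile and target j's profile
    cost = ((dz[:, None, :] - dy[None, :, :]) ** 2).sum(axis=-1)
    plan = sinkhorn_plan(cost, eps)
    return float((plan * cost).sum())      # transport cost as penalty
```

When the features preserve the targets' ordering, the distance profiles align and the penalty is near zero; scrambling the feature order raises it.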
Problem

Research questions and friction points this paper is trying to address.

Preserving target ordinality in the feature space of deep regressors
Reducing the conditional entropy H(Z|Y), which typical regression losses leave largely untouched
Improving generalization with a differentiable, transport-based regularizer
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimal transport-based regularizer that preserves target similarity structure in feature space
Target duplication strategy that further reduces conditional entropy H(Z|Y)
Consistent performance gains on three real-world regression tasks
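The target-duplication strategy can be sketched as follows. This is a minimal stand-in assuming a scalar target repeated k times; the helper names `repeat_targets` and `repeated_mse` are hypothetical, not from the paper:

```python
import numpy as np

def repeat_targets(y, k=8):
    # Encode each scalar target as k identical copies, so the regressor
    # predicts a k-dimensional vector instead of a single scalar.
    return np.repeat(y[:, None], k, axis=1)

def repeated_mse(pred, y, k=8):
    # MSE of the k-dimensional prediction against the repeated targets;
    # per the paper, duplicating targets is aimed at reducing H(Z|Y).
    return float(((pred - repeat_targets(y, k)) ** 2).mean())
```

At inference, the k outputs would be averaged back to a single scalar prediction.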