AI Summary
Cross-domain object counting suffers from performance degradation due to density distribution shifts: task-dependent domain discrepancies that violate conventional domain adaptation assumptions. To address this, we propose a novel conditional feature alignment paradigm based on semantically meaningful partitions (e.g., foreground/background), and formally introduce *conditional divergence*, proving it yields a tighter bound on the source-target joint decision error while preserving task-relevant variations and suppressing task-irrelevant domain shifts. Our method integrates conditional feature alignment, discrete label-space modeling, density-aware domain partitioning, and unsupervised optimization. Extensive experiments on multiple heterogeneous-density counting benchmarks demonstrate substantial improvements over state-of-the-art unsupervised domain adaptation approaches. The theoretical guarantees are empirically validated, confirming the efficacy of our conditional divergence formulation and alignment strategy in mitigating density-related domain gaps.
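The claimed tightening can be sketched against the classical domain-adaptation error bound. The schematic below is an illustrative assumption in the style of Ben-David et al., not the paper's exact theorem; the symbols (source/target errors \(\epsilon_S, \epsilon_T\), divergence \(d\), condition weights \(\pi_c\), ideal joint errors \(\lambda, \lambda'\)) are stand-ins for the formal definitions:

```latex
% Classical (unconditional) bound: target error is controlled by source error,
% a whole-domain divergence, and the ideal joint error \lambda.
\epsilon_T(h) \;\le\; \epsilon_S(h) \;+\; d\big(\mathcal{D}_S, \mathcal{D}_T\big) \;+\; \lambda

% Conditional variant (illustrative): partition each domain by condition c
% (e.g., foreground vs. background) with weights \pi_c. Each per-condition
% divergence no longer charges the bound for shifts explained by the condition
% itself (such as differing foreground/background proportions), so the
% weighted sum can be strictly smaller than the unconditional divergence.
\epsilon_T(h) \;\le\; \epsilon_S(h) \;+\; \sum_{c} \pi_c \, d\big(\mathcal{D}_S^{(c)}, \mathcal{D}_T^{(c)}\big) \;+\; \lambda'
```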
Abstract
Object counting models suffer when deployed across domains with differing density distributions, since density shifts are inherently task-relevant and violate standard domain adaptation assumptions. To address this, we propose a theoretical framework of conditional feature alignment. We first formalize the notion of conditional divergence by partitioning each domain into subsets (e.g., object vs. background) and measuring divergence per condition. We then derive a joint error bound showing that, when a discrete label space is treated as the condition set, aligning distributions conditionally leads to a tighter bound on the combined source-target decision error than unconditional alignment. These insights motivate a general conditional adaptation principle: by preserving task-relevant variations while filtering out nuisance shifts, one can achieve superior cross-domain generalization for counting. We provide both a theoretical contribution, defining conditional divergence and proving its benefit in lowering the joint error, and a practical adaptation strategy that preserves task-relevant information in unsupervised domain-adaptive counting. We demonstrate the effectiveness of our approach through extensive experiments on multiple counting datasets with varying density distributions. The results show that our method outperforms existing unsupervised domain adaptation methods, empirically validating the theoretical insights on conditional feature alignment.
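The per-condition measurement described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it assumes features arrive with a binary condition label (0 = background, 1 = foreground) and uses a biased RBF-kernel MMD as the divergence; the function names `mmd_rbf` and `conditional_divergence` are hypothetical.

```python
import numpy as np

def mmd_rbf(x, y, gamma=1.0):
    """Biased squared MMD between two feature sets under an RBF kernel."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return float(k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean())

def conditional_divergence(src_feats, src_cond, tgt_feats, tgt_cond,
                           conditions=(0, 1)):
    """Partition each domain by condition (e.g., 0 = background,
    1 = foreground) and average the divergence over matching subsets,
    so shifts explained by the condition itself are not penalized."""
    divs = []
    for c in conditions:
        xs = src_feats[src_cond == c]
        xt = tgt_feats[tgt_cond == c]
        if len(xs) and len(xt):
            divs.append(mmd_rbf(xs, xt))
    return float(np.mean(divs))
```

When source and target share the same per-condition feature distributions but differ in foreground/background proportions (a density shift), the conditional divergence stays near zero while the unconditional MMD does not, which is exactly the nuisance shift the alignment strategy aims to ignore.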