🤖 AI Summary
Monocular 3D object detection (M3OD) suffers significant performance degradation under real-world domain shifts due to coupled semantic uncertainty (e.g., category ambiguity) and geometric uncertainty (e.g., unstable 3D localization). To address this, we propose the first test-time adaptation (TTA) framework explicitly designed to handle this dual uncertainty. Our method comprises three key components: (1) an unsupervised focal loss formulated in convex form to enable uncertainty-aware, gradient-stable optimization; (2) a semantics-aware normal field constraint that jointly enforces semantic confidence and geometric structural consistency; and (3) a dual-branch collaborative learning mechanism that establishes a semantic–geometric complementary optimization loop. Extensive experiments across multiple benchmarks and cross-domain settings demonstrate substantial improvements in detection accuracy and 3D localization stability, with superior generalization over existing TTA approaches.
📝 Abstract
Accurate monocular 3D object detection (M3OD) is pivotal for safety-critical applications such as autonomous driving, yet its reliability deteriorates significantly under real-world domain shifts caused by environmental or sensor variations. To address these shifts, Test-Time Adaptation (TTA) methods have emerged, enabling models to adapt to target distributions during inference. While prior TTA approaches recognize the positive correlation between low uncertainty and high generalization ability, they fail to address the dual uncertainty inherent to M3OD: semantic uncertainty (ambiguous class predictions) and geometric uncertainty (unstable spatial localization). To bridge this gap, we propose Dual Uncertainty Optimization (DUO), the first TTA framework designed to jointly minimize both uncertainties for robust M3OD. Through a convex optimization lens, we introduce a convex reformulation of the focal loss and derive from it a novel unsupervised variant, enabling label-agnostic uncertainty weighting and balanced learning for high-uncertainty objects. In parallel, we design a semantic-aware normal field constraint that preserves geometric coherence in regions with clear semantic cues, reducing uncertainty arising from unstable 3D representations. This dual-branch mechanism forms a complementary loop: enhanced spatial perception improves semantic classification, and robust semantic predictions further refine spatial understanding. Extensive experiments demonstrate the superiority of DUO over existing methods across various datasets and domain shift types.
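To make the "label-agnostic uncertainty weighting" idea concrete, the sketch below contrasts the standard supervised focal loss with one plausible unsupervised variant: replacing the one-hot ground-truth target with the model's own softmax distribution, so each class term is self-weighted by its predicted probability while keeping the focal modulating factor. This is an illustrative assumption for intuition only; the abstract does not specify DUO's exact convex formulation, and the function names here are hypothetical.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def focal_loss(logits, targets, gamma=2.0):
    """Supervised focal loss: (1 - p_t)^gamma down-weights easy,
    already-confident examples, focusing learning on hard ones."""
    p = softmax(logits)
    p_t = p[np.arange(len(targets)), targets]
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t)))

def unsupervised_focal_loss(logits, gamma=2.0):
    """Hypothetical label-free variant (NOT the exact DUO loss):
    the model's own softmax replaces the one-hot target, giving an
    entropy-like objective in which each class term still carries
    the focal modulation (1 - p)^gamma. Confident (low-uncertainty)
    predictions therefore incur a smaller loss than ambiguous ones."""
    p = softmax(logits)
    per_class = -((1.0 - p) ** gamma) * p * np.log(p + 1e-12)
    return float(np.mean(per_class.sum(axis=-1)))

# A confident prediction yields a lower unsupervised loss than a
# maximally ambiguous (uniform) one, so minimizing it at test time
# pushes the model toward low-uncertainty predictions.
confident = np.array([[8.0, 0.0, 0.0]])
ambiguous = np.array([[0.0, 0.0, 0.0]])
```

A practical caveat with such self-training objectives is confirmation bias (the model reinforcing its own wrong but confident predictions), which is presumably why DUO pairs this semantic branch with the geometric normal-field constraint rather than using it alone.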