🤖 AI Summary
In visual 3D occupancy prediction, forward projection from 2D to 3D introduces severe height ambiguity: features from distinct vertical layers interfere with one another. To address this, we propose a depth-height decoupling framework. First, we introduce height-map supervision to explicitly encode geometric priors. Second, we design a mask-guided height sampling (MGHS) module that adaptively decouples the height map into binary masks along the height dimension. Third, we employ synergistic feature aggregation (SFA) to optimize cross-subspace feature fusion between bird's-eye view (BEV) and voxel spaces. Our approach significantly mitigates feature confusion caused by height aliasing. Evaluated on Occ3D-nuScenes, it achieves state-of-the-art performance even with minimal input frames. The code is publicly available.
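The binary decoupling step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `mask_guided_height_sampling`, the `(C, H, W)` feature layout, and the `height_bins` intervals are all assumptions chosen for clarity; in practice the intervals would come from the height distribution statistics the paper describes.

```python
import numpy as np

def mask_guided_height_sampling(features, height_map, height_bins):
    """Hypothetical sketch of MGHS-style binary decoupling.

    features:    (C, H, W) 2D image features
    height_map:  (H, W) predicted per-pixel height
    height_bins: list of (low, high) intervals (assumed, in practice
                 derived from height distribution statistics)
    Returns one masked copy of the features per interval (subspace).
    """
    subspaces = []
    for low, high in height_bins:
        # binary mask selecting pixels whose height falls in this interval
        mask = ((height_map >= low) & (height_map < high)).astype(features.dtype)
        # gate the features; broadcast the (H, W) mask over channels
        subspaces.append(features * mask[None])
    return subspaces

# toy example: 4 channels, 2x3 feature map, three height intervals
C, H, W = 4, 2, 3
feats = np.ones((C, H, W))
hmap = np.array([[0.2, 1.5, 3.0],
                 [0.5, 2.5, 4.0]])
subs = mask_guided_height_sampling(feats, hmap,
                                   [(0.0, 1.0), (1.0, 3.0), (3.0, 5.0)])
```

Because the intervals partition the height range, each pixel's features land in exactly one subspace, which is what keeps features from different vertical layers from being pooled into the same grid.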
📝 Abstract
The task of vision-based 3D occupancy prediction aims to reconstruct 3D geometry and estimate its semantic classes from 2D color images, where the 2D-to-3D view transformation is an indispensable step. Most previous methods conduct forward projection, such as BEVPooling and VoxelPooling, both of which map the 2D image features into 3D grids. However, each grid, which represents features within a certain height range, usually absorbs many confusing features that belong to other height ranges. To address this challenge, we present Deep Height Decoupling (DHD), a novel framework that incorporates an explicit height prior to filter out the confusing features. Specifically, DHD first predicts height maps via explicit supervision. Based on the height distribution statistics, DHD designs Mask Guided Height Sampling (MGHS) to adaptively decouple the height map into multiple binary masks. MGHS projects the 2D image features into multiple subspaces, where each grid contains features within reasonable height ranges. Finally, a Synergistic Feature Aggregation (SFA) module is deployed to enhance the feature representation through channel and spatial affinities, enabling further occupancy refinement. On the popular Occ3D-nuScenes benchmark, our method achieves state-of-the-art performance even with minimal input frames. Source code is released at https://github.com/yanzq95/DHD.
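The channel- and spatial-affinity fusion attributed to SFA can be sketched roughly as below. This is a hedged approximation, not the paper's module: the function name `synergistic_feature_aggregation`, the shared `(C, H, W)` layout for the two branches, and the pooling-plus-sigmoid form of the affinities are assumptions; the actual SFA design may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def synergistic_feature_aggregation(bev_feat, voxel_feat):
    """Hypothetical SFA-style fusion of two (C, H, W) feature maps.

    A channel affinity (global average pool over space) recalibrates the
    BEV branch, a spatial affinity (mean over channels) recalibrates the
    voxel branch, and the two reweighted maps are summed.
    """
    fused = bev_feat + voxel_feat
    # channel affinity: (C, 1, 1), one weight per channel
    ch = sigmoid(fused.mean(axis=(1, 2), keepdims=True))
    # spatial affinity: (1, H, W), one weight per location
    sp = sigmoid(fused.mean(axis=0, keepdims=True))
    return ch * bev_feat + sp * voxel_feat

# toy usage with assumed shapes
bev = np.ones((2, 2, 2))
vox = np.zeros((2, 2, 2))
out = synergistic_feature_aggregation(bev, vox)
```

The design intuition is that the channel affinity captures "which feature types matter" while the spatial affinity captures "which locations matter", so the fused representation is reweighted along both axes before refinement.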