🤖 AI Summary
In multi-view camera-only 3D perception, dynamic objects impede temporal alignment of bird's-eye-view (BEV) features across frames. To address this, the paper proposes OnlineBEV, a recurrent online temporal fusion architecture that explicitly models object motion to achieve spatiotemporal alignment of BEV features over time. Its core component, the Motion-guided BEV Fusion Network (MBFNet), extracts motion features from consecutive BEV frames and uses them to dynamically align historical BEV features with the current ones, while a temporal consistency loss further enforces this alignment during end-to-end training. Crucially, the method requires no auxiliary motion annotations and learns solely from image inputs. On the nuScenes test set, OnlineBEV achieves 63.9% NDS, state-of-the-art among camera-only 3D detectors at the time, underscoring the value of motion-aware online temporal fusion for learning robust BEV representations of dynamic scenes.
📝 Abstract
Multi-view camera-based 3D perception can be conducted using bird's-eye-view (BEV) features obtained through perspective-view-to-BEV transformations. Several studies have shown that the performance of these 3D perception methods can be further enhanced by combining sequential BEV features obtained from multiple camera frames. However, even after compensating for the ego-motion of an autonomous agent, the performance gain from temporal aggregation is limited when combining a large number of image frames. This limitation arises because object motion causes the BEV features to change dynamically over time. In this paper, we introduce a novel temporal 3D perception method called OnlineBEV, which combines BEV features over time using a recurrent structure. This structure increases the effective number of combined features with minimal memory usage. Maintaining strong performance, however, requires that the features be spatially aligned over time. OnlineBEV employs the Motion-guided BEV Fusion Network (MBFNet) to achieve temporal feature alignment. MBFNet extracts motion features from consecutive BEV frames and dynamically aligns historical BEV features with current ones using these motion features. To enforce temporal feature alignment explicitly, we use a temporal consistency learning loss, which captures discrepancies between historical and target BEV features. Experiments conducted on the nuScenes benchmark demonstrate that OnlineBEV achieves significant performance gains over the current best method, SOLOFusion. OnlineBEV achieves 63.9% NDS on the nuScenes test set, recording state-of-the-art performance in the camera-only 3D object detection task.
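The abstract does not give implementation details, but the core idea of motion-guided alignment can be illustrated concretely: a per-cell motion field (which MBFNet would predict from consecutive BEV frames) warps the historical BEV feature map into the current frame, and a consistency loss penalizes the remaining discrepancy. The sketch below is a minimal NumPy illustration under assumed conventions; the function names (`warp_bev`, `temporal_consistency_loss`), the bilinear-warp formulation, and the flow conventions are hypothetical, not the authors' code, and the motion field is taken as given rather than predicted by a network.

```python
import numpy as np

def warp_bev(hist, flow):
    """Bilinearly warp a historical BEV feature map into the current frame.

    hist: (H, W, C) historical BEV features.
    flow: (H, W, 2) per-cell offsets (dx, dy) pointing from each current
          cell back to its source location in the historical map.
    """
    H, W, C = hist.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Source sampling coordinates, clipped to the grid.
    src_x = np.clip(xs + flow[..., 0], 0, W - 1)
    src_y = np.clip(ys + flow[..., 1], 0, H - 1)
    # Integer corners and fractional weights for bilinear interpolation.
    x0 = np.floor(src_x).astype(int)
    y0 = np.floor(src_y).astype(int)
    x1 = np.minimum(x0 + 1, W - 1)
    y1 = np.minimum(y0 + 1, H - 1)
    wx = (src_x - x0)[..., None]
    wy = (src_y - y0)[..., None]
    top = (1 - wx) * hist[y0, x0] + wx * hist[y0, x1]
    bot = (1 - wx) * hist[y1, x0] + wx * hist[y1, x1]
    return (1 - wy) * top + wy * bot

def temporal_consistency_loss(warped_hist, curr):
    """Mean squared discrepancy between aligned history and current features."""
    return float(np.mean((warped_hist - curr) ** 2))

if __name__ == "__main__":
    # Toy example: a feature pattern that shifted one cell in +x between frames.
    hist = np.arange(16, dtype=float).reshape(4, 4, 1)
    curr = np.empty_like(hist)
    curr[:, 1:] = hist[:, :3]   # content moved right by one cell
    curr[:, :1] = hist[:, :1]   # border padding
    # A flow of dx = -1 points each current cell back to its historical source.
    flow = np.zeros((4, 4, 2))
    flow[..., 0] = -1.0
    aligned = warp_bev(hist, flow)
    print("loss after alignment:", temporal_consistency_loss(aligned, curr))
    # A recurrent fusion step would then blend, e.g. 0.5 * aligned + 0.5 * curr.
```

In the actual method the flow would be a learned output and the loss would backpropagate through the warp, driving the motion features toward offsets that make historical and current BEV features agree.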