DySS: Dynamic Queries and State-Space Learning for Efficient 3D Object Detection from Multi-Camera Videos

📅 2025-06-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the high computational cost and inefficiency of static queries in multi-camera video BEV 3D detection, this work pioneers the integration of State Space Models (SSMs) for multi-frame temporal modeling. We propose a dynamic, variable-length query mechanism that adaptively refines the query set via merging, deletion, and splitting operations, jointly optimized with two auxiliary tasks: future BEV prediction and masked BEV reconstruction. This framework enables sparse BEV feature representation and efficient temporal modeling. On the nuScenes test set, our method achieves 65.31 NDS and 57.4 mAP—surpassing prior state-of-the-art methods—while attaining 56.2 NDS and 46.2 mAP on the validation set at 33 FPS inference speed. The approach thus delivers a significant improvement in both accuracy and real-time performance.

📝 Abstract
Camera-based 3D object detection in Bird's Eye View (BEV) is one of the most important perception tasks in autonomous driving. Earlier methods rely on dense BEV features, which are costly to construct. More recent works explore sparse query-based detection. However, they still require a large number of queries and can become expensive to run when more video frames are used. In this paper, we propose DySS, a novel method that employs state-space learning and dynamic queries. More specifically, DySS leverages a state-space model (SSM) to sequentially process the sampled features over time steps. In order to encourage the model to better capture the underlying motion and correspondence information, we introduce auxiliary tasks of future prediction and masked reconstruction to better train the SSM. The state of the SSM then provides an informative yet efficient summarization of the scene. Based on the state-space learned features, we dynamically update the queries via merge, remove, and split operations, which help maintain a useful, lean set of detection queries throughout the network. Our proposed DySS achieves both superior detection performance and efficient inference. Specifically, on the nuScenes test split, DySS achieves 65.31 NDS and 57.4 mAP, outperforming the latest state of the art. On the val split, DySS achieves 56.2 NDS and 46.2 mAP, as well as a real-time inference speed of 33 FPS.
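The core temporal idea in the abstract is that a state-space model processes sampled features frame by frame, so that its hidden state becomes a compact summary of the scene over time. The minimal sketch below shows a plain discrete linear state-space recurrence (h_t = A·h_{t−1} + B·x_t, y_t = C·h_t) over a sequence of per-frame feature vectors; it is an illustrative toy, not DySS's actual SSM architecture, and all names, dimensions, and matrices here are assumptions.

```python
import numpy as np

def ssm_scan(x_seq, A, B, C):
    """Run a discrete linear state-space recurrence over per-frame features:
        h_t = A @ h_{t-1} + B @ x_t
        y_t = C @ h_t
    The final state h_T acts as a compact summary of the whole sequence,
    analogous to how the SSM state summarizes the scene across video frames.
    """
    h = np.zeros(A.shape[0])
    ys = []
    for x in x_seq:
        h = A @ h + B @ x      # fold the current frame into the state
        ys.append(C @ h)       # read out a per-frame output from the state
    return np.stack(ys), h

# toy example: 4 frames of 8-dim features summarized into a 16-dim state
rng = np.random.default_rng(0)
d_in, d_state, T = 8, 16, 4
A = 0.9 * np.eye(d_state)                        # stable state transition
B = rng.standard_normal((d_state, d_in)) * 0.1
C = rng.standard_normal((d_in, d_state)) * 0.1
x_seq = rng.standard_normal((T, d_in))
ys, h_final = ssm_scan(x_seq, A, B, C)
```

The linear scan runs in time proportional to the number of frames, which is why a fixed-size state is an efficient alternative to attending over all past frames.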
Problem

Research questions and friction points this paper is trying to address.

Efficient 3D object detection from multi-camera videos
Reducing computational cost of sparse query-based detection
Improving motion and correspondence learning in BEV
Innovation

Methods, ideas, or system contributions that make the work stand out.

State-space model for sequential feature processing
Dynamic queries via merge, remove, split
Auxiliary tasks enhance motion and correspondence learning
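The dynamic-query idea above can be sketched as a variable-length update over a set of query embeddings: near-duplicate queries are merged, low-confidence queries are removed, and high-confidence queries are split. The function below is a simplified illustration under assumed thresholds and a simple greedy merge; it is not the paper's actual query-update module, and `merge_thresh`, `remove_thresh`, and `split_thresh` are hypothetical parameters.

```python
import numpy as np

def update_queries(queries, scores, merge_thresh=0.5,
                   remove_thresh=0.1, split_thresh=0.9):
    """Illustrative variable-length query update.
    queries: (N, D) query embeddings; scores: (N,) confidences in [0, 1].
    Returns a possibly shorter or longer (queries, scores) pair.
    """
    # remove: drop low-confidence queries
    keep = scores >= remove_thresh
    queries, scores = queries[keep], scores[keep]

    # merge: greedily average groups of queries that lie close together
    new_q, new_s = [], []
    used = np.zeros(len(queries), dtype=bool)
    for i in range(len(queries)):
        if used[i]:
            continue
        used[i] = True
        group = [i]
        for j in range(i + 1, len(queries)):
            if not used[j] and np.linalg.norm(queries[i] - queries[j]) < merge_thresh:
                used[j] = True
                group.append(j)
        new_q.append(queries[group].mean(axis=0))
        new_s.append(scores[group].max())
    queries, scores = np.stack(new_q), np.array(new_s)

    # split: duplicate high-confidence queries with a small perturbation
    hi = scores > split_thresh
    if hi.any():
        queries = np.concatenate([queries, queries[hi] + 0.01])
        scores = np.concatenate([scores, scores[hi]])
    return queries, scores

# toy example: two near-duplicates, one low-confidence, one high-confidence query
queries = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [10.0, 10.0]])
scores = np.array([0.6, 0.6, 0.05, 0.95])
q, s = update_queries(queries, scores)
# the two near-duplicates merge, the 0.05 query is removed,
# and the 0.95 query is split into two
```

Keeping the query set lean this way is what lets a sparse detector avoid carrying a large fixed query budget through every decoder layer and frame.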
👥 Authors
R. Yasarla (Qualcomm AI Research*)
Shizhong Han (Johns Hopkins)
Hong Cai (Qualcomm AI Research*)
F. Porikli (Qualcomm AI Research*)