🤖 AI Summary
To address the limited cross-scene transferability in 3D object detection, this paper proposes Scene-Oriented Prompt Pooling (SOP²), the first systematic investigation of prompt tuning for 3D vision tasks. Leveraging a large-scale 3D detector pre-trained on Waymo, SOP² introduces a learnable, scene-specific prompt pool coupled with a lightweight prompt generator to efficiently adapt the model to unseen scenes—without fine-tuning the backbone network, thereby substantially reducing adaptation overhead. Extensive experiments on cross-domain benchmarks, including nuScenes, demonstrate that SOP² consistently improves mean Average Precision (mAP) by 3.2% on average. These results validate the effectiveness of prompt pooling in encoding scene priors and facilitating knowledge transfer in 3D detection. Moreover, SOP² establishes a novel paradigm for prompt learning in 3D vision, bridging the gap between vision-language prompting and geometric perception tasks.
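The summary above describes the core mechanism: a frozen pre-trained backbone, a learnable pool of scene-specific prompts, and a lightweight generator that matches each scene to its most relevant prompts. A minimal sketch of that selection step is below; all names, sizes, and the mean-pooling generator are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Illustrative sketch of a scene-oriented prompt pool (hypothetical sizes).
rng = np.random.default_rng(0)
POOL_SIZE, DIM, TOP_K = 8, 16, 2

prompt_pool = rng.normal(size=(POOL_SIZE, DIM))  # learnable prompt vectors
pool_keys = rng.normal(size=(POOL_SIZE, DIM))    # learnable keys for matching

def generate_query(scene_feats):
    # A lightweight "prompt generator" stand-in: mean-pool scene features.
    return scene_feats.mean(axis=0)

def select_prompts(scene_feats, k=TOP_K):
    # Pick the k prompts whose keys are most similar to the scene query.
    q = generate_query(scene_feats)
    sims = (pool_keys @ q) / (
        np.linalg.norm(pool_keys, axis=1) * np.linalg.norm(q) + 1e-8
    )
    idx = np.argsort(-sims)[:k]
    return prompt_pool[idx]

scene = rng.normal(size=(100, DIM))  # e.g. 100 point/voxel features of one scene
prompts = select_prompts(scene)
# Prepend the selected prompts; only the pool/keys would be trained,
# while the detection backbone stays frozen.
augmented = np.concatenate([prompts, scene], axis=0)
print(augmented.shape)  # (102, 16)
```

Only the pool, keys, and generator carry trainable parameters here, which is what keeps adaptation overhead small relative to fine-tuning the whole detector.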
📝 Abstract
Large Language Models (LLMs) such as GPT-3 exhibit strong generalization capabilities. Through transfer learning techniques such as fine-tuning and prompt tuning, they can be adapted to various downstream tasks with minimal parameter adjustments, an approach that is particularly common in Natural Language Processing (NLP). This paper explores the effectiveness of common prompt tuning methods in 3D object detection. We investigate whether a model trained on the large-scale Waymo dataset can serve as a foundation model and adapt to other scenarios in the 3D object detection field. We sequentially examine the impact of prompt tokens and prompt generators, and further propose a Scene-Oriented Prompt Pool (**SOP²**). We demonstrate the effectiveness of prompt pools in 3D object detection, with the goal of inspiring future researchers to delve deeper into the potential of prompts in the 3D field.