DriveEditor: A Unified 3D Information-Guided Framework for Controllable Object Editing in Driving Scenes

📅 2024-12-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenges of object editing authenticity and geometric consistency in autonomous driving video data augmentation, this paper proposes a unified video editing framework based on diffusion models. Methodologically, it introduces a novel hierarchical 3D bounding box injection mechanism for precise spatial relocalization; designs a three-level appearance preservation strategy—detail-, semantic-, and 3D-prior-driven—that ensures cross-view appearance consistency using only a single reference image; and integrates NeRF-derived 3D priors, multi-scale feature alignment, and geometric projection constraints. Evaluated on the nuScenes dataset, the framework achieves state-of-the-art performance in editing fidelity and controllability. It significantly enhances generalization across downstream tasks, including 3D object detection and trajectory prediction, demonstrating robustness and practical utility for autonomous driving perception systems.

📝 Abstract
Vision-centric autonomous driving systems require diverse data for robust training and evaluation, which can be augmented by manipulating object positions and appearances within existing scene captures. While recent advancements in diffusion models have shown promise in video editing, their application to object manipulation in driving scenarios remains challenging due to imprecise positional control and difficulties in preserving high-fidelity object appearances. To address these challenges in position and appearance control, we introduce DriveEditor, a diffusion-based framework for object editing in driving videos. DriveEditor offers a unified framework for comprehensive object editing operations, including repositioning, replacement, deletion, and insertion. These diverse manipulations are all achieved through a shared set of varying inputs, processed by identical position control and appearance maintenance modules. The position control module projects the given 3D bounding box while preserving depth information and hierarchically injects it into the diffusion process, enabling precise control over object position and orientation. The appearance maintenance module preserves consistent attributes with a single reference image by employing a three-tiered approach: low-level detail preservation, high-level semantic maintenance, and the integration of 3D priors from a novel view synthesis model. Extensive qualitative and quantitative evaluations on the nuScenes dataset demonstrate DriveEditor's exceptional fidelity and controllability in generating diverse driving scene edits, as well as its remarkable ability to facilitate downstream tasks. Project page: https://yvanliang.github.io/DriveEditor.
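The position control module's first step described above — projecting the corners of a given 3D bounding box onto the image plane while retaining their depth — can be sketched as follows. This is a minimal NumPy illustration with a hypothetical pinhole intrinsic matrix; the helper name and the example box are assumptions, not DriveEditor's actual injection code:

```python
import numpy as np

def project_box_with_depth(corners_3d, K):
    """Project the 8 corners of a 3D bounding box (camera coordinates, metres)
    onto the image plane, keeping the per-corner depth.

    corners_3d: (8, 3) array of XYZ corners in the camera frame.
    K:          (3, 3) camera intrinsic matrix.
    Returns an (8, 3) array of (u, v, depth) per corner.
    """
    corners_3d = np.asarray(corners_3d, dtype=float)
    depths = corners_3d[:, 2]                 # z in the camera frame
    pts = (K @ corners_3d.T).T                # homogeneous pixel coordinates
    uv = pts[:, :2] / depths[:, None]         # perspective divide
    return np.concatenate([uv, depths[:, None]], axis=1)

# Hypothetical example: a 2 m cube centred 10 m in front of the camera.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
corners = np.array([[x, y, z]
                    for x in (-1.0, 1.0)
                    for y in (-1.0, 1.0)
                    for z in (9.0, 11.0)])
proj = project_box_with_depth(corners, K)
```

Keeping the depth channel (rather than a flat 2D mask) is what lets a conditioning signal like this distinguish near from far objects along the same viewing ray.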
Problem

Research questions and friction points this paper is trying to address.

Autonomous driving
Video manipulation
Photorealism

Innovation

Methods, ideas, or system contributions that make the work stand out.

3D object editing
autonomous vehicle training
realism enhancement
Yiyuan Liang
Huazhong University of Science and Technology
Zhiying Yan
Huazhong University of Science and Technology, National Key Laboratory of Multispectral Information Intelligent Processing Technology
Liqun Chen
Huazhong University of Science and Technology, National Key Laboratory of Multispectral Information Intelligent Processing Technology
Jiahuan Zhou
Peking University
Computer Vision, Machine Learning, Deep Learning
Luxin Yan
Huazhong University of Science and Technology
Computer Vision, Image Processing, Deep Learning
Sheng Zhong
Nanjing University
computer networks, security and privacy, theory of computing
Xu Zou
Z.ai
language generation, reasoning, world modeling