🤖 AI Summary
To address the challenges of authenticity and geometric consistency when editing objects in autonomous driving videos for data augmentation, this paper proposes DriveEditor, a unified diffusion-based framework that supports object repositioning, replacement, deletion, and insertion through shared position-control and appearance-maintenance modules. The position control module projects a given 3D bounding box while preserving depth information and hierarchically injects it into the diffusion process, enabling precise control over object position and orientation. The appearance maintenance module preserves an object's attributes from a single reference image via a three-tiered strategy: low-level detail preservation, high-level semantic maintenance, and 3D priors from a novel view synthesis model. Evaluated on the nuScenes dataset, the framework demonstrates strong fidelity and controllability across diverse driving scene edits and facilitates downstream perception tasks.
📝 Abstract
Vision-centric autonomous driving systems require diverse data for robust training and evaluation, which can be augmented by manipulating object positions and appearances within existing scene captures. While recent advancements in diffusion models have shown promise in video editing, their application to object manipulation in driving scenarios remains challenging due to imprecise positional control and difficulties in preserving high-fidelity object appearances. To address these challenges in position and appearance control, we introduce DriveEditor, a diffusion-based framework for object editing in driving videos. DriveEditor offers a unified framework for comprehensive object editing operations, including repositioning, replacement, deletion, and insertion. These diverse manipulations are all achieved through a shared set of varying inputs, processed by identical position control and appearance maintenance modules. The position control module projects the given 3D bounding box while preserving depth information and hierarchically injects it into the diffusion process, enabling precise control over object position and orientation. The appearance maintenance module preserves consistent attributes with a single reference image by employing a three-tiered approach: low-level detail preservation, high-level semantic maintenance, and the integration of 3D priors from a novel view synthesis model. Extensive qualitative and quantitative evaluations on the nuScenes dataset demonstrate DriveEditor's exceptional fidelity and controllability in generating diverse driving scene edits, as well as its remarkable ability to facilitate downstream tasks. Project page: https://yvanliang.github.io/DriveEditor.
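The abstract's position control module "projects the given 3D bounding box while preserving depth information." A minimal sketch of that projection step, assuming a standard pinhole camera model with the box already expressed in the camera frame (this is an illustration of the general technique, not DriveEditor's actual code; all names and values are hypothetical):

```python
import numpy as np

def project_box_corners(center, size, yaw, K):
    """Project the 8 corners of a 3D bounding box into the image plane,
    keeping each corner's depth so the 2D control signal retains 3D cues.
    Camera frame convention assumed: x right, y down, z forward (depth);
    yaw is a rotation about the vertical (y) axis."""
    l, h, w = size
    # Corner offsets in the object frame: length along x, height along y,
    # width along z.
    x = np.array([1, 1, 1, 1, -1, -1, -1, -1]) * l / 2
    y = np.array([1, 1, -1, -1, 1, 1, -1, -1]) * h / 2
    z = np.array([1, -1, -1, 1, 1, -1, -1, 1]) * w / 2
    corners = np.stack([x, y, z])                       # (3, 8)
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    corners = R @ corners + np.asarray(center).reshape(3, 1)
    uvw = K @ corners                                   # pinhole projection
    depth = uvw[2]                                      # per-corner depth
    uv = uvw[:2] / depth                                # pixel coordinates
    return uv, depth                                    # (2, 8), (8,)

# Hypothetical intrinsics and box parameters for illustration.
K = np.array([[1000.0, 0.0, 800.0],
              [0.0, 1000.0, 450.0],
              [0.0, 0.0, 1.0]])
uv, depth = project_box_corners(center=(2.0, 0.5, 10.0),
                                size=(4.5, 1.6, 2.0), yaw=0.3, K=K)
```

Carrying the per-corner depths alongside the projected pixels (rather than collapsing to a flat 2D box) is what lets a conditioning signal disambiguate object distance and orientation, which is the property the position control module relies on.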