Advancing 3D Gaussian Splatting Editing with Complementary and Consensus Information

📅 2025-03-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses texture artifacts and blurred object boundaries in text-guided 3D Gaussian Splatting editing, which arise from multi-view geometric inconsistency and insufficient use of depth information. To tackle these issues, we propose two core innovations: (1) a complementary information mutual learning network that improves cross-view depth estimation accuracy, and (2) a wavelet consensus attention mechanism that aligns latent-space representations across views during diffusion denoising. The method integrates 3D Gaussian Splatting, depth-conditioned editing, and diffusion priors, preserving geometric fidelity while improving texture plausibility and boundary sharpness. Extensive experiments show state-of-the-art performance in rendering quality, multi-view consistency, and editing fidelity, outperforming existing methods.
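The summary does not spell out how the wavelet consensus attention works, but one plausible reading is: decompose each view's diffusion latent with a Haar wavelet and let the views attend to one another only on the low-frequency band, so coarse structure reaches a consensus while high-frequency detail stays per-view. The sketch below is a minimal PyTorch illustration under that assumption; `WaveletConsensusAttention`, its shapes, and the single-level Haar transform are illustrative, not the authors' implementation.

```python
# Hypothetical sketch of a "wavelet consensus attention" step: Haar-decompose each
# view's latent, run cross-view self-attention on the low-frequency band, recombine.
import torch
import torch.nn as nn


def haar_dwt(x):
    """One-level 2D Haar decomposition of (V, C, H, W) feature maps."""
    a, b = x[..., 0::2, 0::2], x[..., 0::2, 1::2]
    c, d = x[..., 1::2, 0::2], x[..., 1::2, 1::2]
    ll = (a + b + c + d) / 2
    lh = (a + b - c - d) / 2
    hl = (a - b + c - d) / 2
    hh = (a - b - c + d) / 2
    return ll, lh, hl, hh


def haar_idwt(ll, lh, hl, hh):
    """Inverse of haar_dwt."""
    x = torch.zeros(*ll.shape[:-2], ll.shape[-2] * 2, ll.shape[-1] * 2,
                    device=ll.device, dtype=ll.dtype)
    x[..., 0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[..., 0::2, 1::2] = (ll + lh - hl - hh) / 2
    x[..., 1::2, 0::2] = (ll - lh + hl - hh) / 2
    x[..., 1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x


class WaveletConsensusAttention(nn.Module):
    """Cross-view self-attention applied only to the low-frequency wavelet band."""

    def __init__(self, channels, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, latents):                    # latents: (V, C, H, W), one per view
        ll, lh, hl, hh = haar_dwt(latents)
        v, c, h, w = ll.shape
        tokens = ll.permute(0, 2, 3, 1).reshape(1, v * h * w, c)  # all views in one sequence
        consensus, _ = self.attn(tokens, tokens, tokens)          # views exchange coarse structure
        ll = consensus.reshape(v, h, w, c).permute(0, 3, 1, 2)
        return haar_idwt(ll, lh, hl, hh)           # high-frequency detail stays per-view


# Example: align four 64x64 latents with 8 channels each.
aligned = WaveletConsensusAttention(channels=8)(torch.randn(4, 8, 64, 64))
```

Restricting the attention to the low-frequency sub-band is one simple way to enforce agreement on layout and shading across views without washing out per-view detail.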

📝 Abstract
We present a novel framework for enhancing the visual fidelity and consistency of text-guided 3D Gaussian Splatting (3DGS) editing. Existing editing approaches face two critical challenges: inconsistent geometric reconstructions across multiple viewpoints, particularly in challenging camera positions, and ineffective utilization of depth information during image manipulation, resulting in over-texture artifacts and degraded object boundaries. To address these limitations, we introduce: 1) A complementary information mutual learning network that enhances depth map estimation from 3DGS, enabling precise depth-conditioned 3D editing while preserving geometric structures. 2) A wavelet consensus attention mechanism that effectively aligns latent codes during the diffusion denoising process, ensuring multi-view consistency in the edited results. Through extensive experimentation, our method demonstrates superior performance in rendering quality and view consistency compared to state-of-the-art approaches. The results validate our framework as an effective solution for text-guided editing of 3D scenes.
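As a rough mental model of how depth-conditioned 3DGS editing of this kind is typically wired together (an assumption about the pipeline, not the paper's released code): render each training view and its depth from the Gaussians, refine the depth, edit the 2D render with a text- and depth-conditioned diffusion model, then optimize the Gaussians toward the edited views. In the sketch below, `gaussians.render(cam)`, `refine_depth`, and `depth_conditioned_edit` are hypothetical placeholder interfaces standing in for the 3DGS rasterizer, the mutual-learning depth network, and a depth-ControlNet-style editor.

```python
# Minimal sketch of a depth-conditioned iterative editing loop for 3DGS.
# All callables passed in are hypothetical placeholders, not the authors' code.
import torch
import torch.nn.functional as F


def edit_scene(gaussians, cameras, prompt, refine_depth, depth_conditioned_edit,
               rounds=3, lr=1e-3):
    optimizer = torch.optim.Adam(gaussians.parameters(), lr=lr)
    for _ in range(rounds):
        for cam in cameras:
            with torch.no_grad():
                rgb, depth = gaussians.render(cam)                   # current view + raw depth
                depth = refine_depth(rgb, depth)                     # sharpen depth before editing
                target = depth_conditioned_edit(rgb, depth, prompt)  # 2D edit, geometry respected
            pred, _ = gaussians.render(cam)                          # differentiable re-render
            loss = F.l1_loss(pred, target)                           # pull the scene toward the edit
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return gaussians
```

Conditioning the 2D editor on refined depth is what keeps the per-view edits geometrically compatible, so the subsequent 3DGS optimization does not have to reconcile contradictory targets.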
Problem

Research questions and friction points this paper is trying to address.

Limited visual fidelity in text-guided 3D Gaussian Splatting editing
Inconsistent geometric reconstructions across viewpoints, especially at challenging camera positions
Underused depth information during image manipulation, leading to texture artifacts and degraded object boundaries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Complementary information mutual learning network (see the sketch after this list)
Wavelet consensus attention mechanism
Depth-conditioned 3D editing that preserves geometric structure
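One plausible reading of the complementary information mutual learning idea (an assumption; the paper's architecture may differ): depth rasterized from the Gaussians and a monocular depth estimate from the rendered RGB make complementary errors, so two small branches fuse both sources and are trained with a fidelity term per source plus a consensus term pulling the refined maps together. A self-contained PyTorch sketch:

```python
# Illustrative mutual-learning depth refinement between two complementary depth sources.
# Architecture and losses are assumptions, not the paper's network.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MutualDepthRefiner(nn.Module):
    """Two branches refine complementary depth estimates using each other's features."""

    def __init__(self, feat=16):
        super().__init__()
        self.enc_gs = nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU())
        self.enc_mono = nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU())
        self.head_gs = nn.Conv2d(2 * feat, 1, 3, padding=1)
        self.head_mono = nn.Conv2d(2 * feat, 1, 3, padding=1)

    def forward(self, gs_depth, mono_depth):
        f_gs, f_mono = self.enc_gs(gs_depth), self.enc_mono(mono_depth)
        fused = torch.cat([f_gs, f_mono], dim=1)        # complementary evidence from both sources
        refined_gs = gs_depth + self.head_gs(fused)     # residual correction of the 3DGS depth
        refined_mono = mono_depth + self.head_mono(fused)
        return refined_gs, refined_mono


def mutual_learning_loss(refined_gs, refined_mono, gs_depth, mono_depth, w=0.5):
    """Keep each output close to its source while pulling the two toward consensus."""
    fidelity = F.l1_loss(refined_gs, gs_depth) + F.l1_loss(refined_mono, mono_depth)
    consensus = F.l1_loss(refined_gs, refined_mono)
    return fidelity + w * consensus


# Example with dummy depth maps for a single 128x128 view.
model = MutualDepthRefiner()
gs_d, mono_d = torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128)
ref_gs, ref_mono = model(gs_d, mono_d)
mutual_learning_loss(ref_gs, ref_mono, gs_d, mono_d).backward()
```

Pairing a fidelity term with a consensus term lets each estimate correct the other without either collapsing onto its counterpart; the weight `w` trades off agreement against faithfulness to the sources.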
Xuanqi Zhang, University of Ottawa
Jieun Lee, Hansung University
Chris Joslin, Carleton University (Computer Graphics & Animation, Motion Tracking, Collaborative Virtual Environments, Video Compression & Adaptation)
Wonsook Lee, University of Ottawa