LoGoColor: Local-Global 3D Colorization for 360° Scenes

📅 2025-12-09
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Single-channel 3D reconstruction yields geometry without color; existing 3D coloring methods—based on distilling 2D image coloring models—suffer from monotonous and multi-view-inconsistent colors due to view averaging, especially in complex 360° scenes. To address this, we propose a Local-Global dual-granularity consistency modeling framework: sub-scene partitioning enables joint optimization with local geometric alignment constraints and global cross-view color harmonization. We employ a fine-tuned multi-view diffusion model for high-fidelity coloring and introduce the Color Diversity Index (CDI) to quantitatively evaluate color richness. Our method achieves state-of-the-art performance on 360° panoramic scenes, improving PSNR and SSIM by 12.3% and 9.7%, respectively, while significantly enhancing color diversity, naturalness, and multi-view geometric consistency.

Technology Category

Application Category

📝 Abstract
Single-channel 3D reconstruction is widely used in fields such as robotics and medical imaging. While this line of work excels at reconstructing 3D geometry, the outputs are not colored 3D models, thus 3D colorization is required for visualization. Recent 3D colorization studies address this problem by distilling 2D image colorization models. However, these approaches suffer from an inherent inconsistency of 2D image models. This results in colors being averaged during training, leading to monotonous and oversimplified results, particularly in complex 360° scenes. In contrast, we aim to preserve color diversity by generating a new set of consistently colorized training views, thereby bypassing the averaging process. Nevertheless, eliminating the averaging process introduces a new challenge: ensuring strict multi-view consistency across these colorized views. To achieve this, we propose LoGoColor, a pipeline designed to preserve color diversity by eliminating this guidance-averaging process with a `Local-Global' approach: we partition the scene into subscenes and explicitly tackle both inter-subscene and intra-subscene consistency using a fine-tuned multi-view diffusion model. We demonstrate that our method achieves quantitatively and qualitatively more consistent and plausible 3D colorization on complex 360° scenes than existing methods, and validate its superior color diversity using a novel Color Diversity Index.
Problem

Research questions and friction points this paper is trying to address.

Addresses inconsistent colorization from 2D model distillation in 3D scenes
Eliminates color averaging that causes oversimplified results in 360° scenes
Ensures multi-view consistency while preserving color diversity in 3D reconstruction
Innovation

Methods, ideas, or system contributions that make the work stand out.

Local-Global approach partitions scenes into subscenes
Fine-tuned multi-view diffusion model ensures consistency
Generates new colorized training views to avoid averaging
🔎 Similar Papers
No similar papers found.