CrossModalityDiffusion: Multi-Modal Novel View Synthesis with Unified Intermediate Representation

📅 2025-01-16
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses novel-view synthesis across remote sensing modalities (EO, SAR, LiDAR) without ground-truth geometry or cross-modal correspondences. The proposed framework builds on a geometry-aware, unified 3D voxel feature representation: modality-specific encoders extract geometrically consistent voxel features, the modalities are fused by overlapping their feature volumes in a shared space and rendering them into feature images via differentiable volumetric rendering, and a conditional diffusion model synthesizes novel views in the target modality. All modules are trained jointly end to end, requiring neither cross-modal registration nor explicit geometric supervision and enabling arbitrary input/output modality combinations. Evaluated on ShapeNet Cars, the approach generates novel views with high visual fidelity and strong geometric consistency, significantly surpassing single-modality baselines on standard metrics (PSNR, SSIM, LPIPS) and demonstrating robustness under sparse, heterogeneous sensor inputs.
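The differentiable volumetric rendering step presumably follows standard emission-absorption compositing, applied to voxel features rather than radiance. A sketch in our own notation (not symbols from the paper): sigma_i, f_i, and delta_i are the density, feature, and sample spacing at the i-th of N samples along a ray r.

```latex
% Feature image at the pixel pierced by ray r, composited from N ray samples.
\hat{\mathbf{F}}(\mathbf{r}) = \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right) \mathbf{f}_i,
\qquad
T_i = \exp\!\left(-\sum_{j=1}^{i-1} \sigma_j \delta_j\right)
```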

📝 Abstract
Geospatial imaging leverages data from diverse sensing modalities, such as EO, SAR, and LiDAR, acquired from platforms ranging from ground-level drones to satellites. These heterogeneous inputs offer significant opportunities for scene understanding but present challenges in interpreting geometry accurately, particularly in the absence of precise ground truth data. To address this, we propose CrossModalityDiffusion, a modular framework designed to generate images across different modalities and viewpoints without prior knowledge of scene geometry. CrossModalityDiffusion employs modality-specific encoders that take multiple input images and produce geometry-aware feature volumes that encode scene structure relative to their input camera positions. The space in which the feature volumes are placed acts as a common ground for unifying the input modalities. These feature volumes are overlapped and rendered into feature images from novel perspectives using volumetric rendering techniques. The rendered feature images are used as conditioning inputs for a modality-specific diffusion model, enabling the synthesis of novel images in the desired output modality. We show that jointly training the different modules ensures consistent geometric understanding across all modalities within the framework. We validate CrossModalityDiffusion's capabilities on the synthetic ShapeNet Cars dataset, demonstrating its effectiveness in generating accurate and consistent novel views across multiple imaging modalities and perspectives.
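As a concrete reading of this pipeline, here is a minimal PyTorch sketch of the data flow (encode each modality into a voxel volume, overlap the volumes in the shared space, volume-render a feature image, condition a denoiser on it). Every module here (ImageToVoxelEncoder, the mean-pooled fusion, the single-conv stand-in for the diffusion model) is our own illustrative assumption, not the paper's architecture; the rendering reuses the compositing shown under the summary above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageToVoxelEncoder(nn.Module):
    """Stand-in for a modality-specific encoder: lifts a 2D image into a
    coarse 3D feature volume. The paper's encoders are geometry-aware and
    pose-conditioned; this placeholder only makes the data flow concrete."""

    def __init__(self, in_ch=1, feat_dim=16, grid=32):
        super().__init__()
        self.feat_dim, self.grid = feat_dim, grid
        self.to_depth = nn.Conv2d(in_ch, feat_dim * grid, 3, padding=1)

    def forward(self, img):                            # (B, C, H, W)
        b, _, h, w = img.shape
        vol = self.to_depth(img).view(b, self.feat_dim, self.grid, h, w)
        # Resample onto the cubic grid shared by all modalities.
        return F.interpolate(vol, size=(self.grid,) * 3,
                             mode="trilinear", align_corners=False)

def render_feature_image(volume, density_head):
    """Orthographic emission-absorption compositing along the depth axis,
    applied to features instead of colors (a simplification of the paper's
    camera-aware volumetric rendering)."""
    sigma = torch.relu(density_head(volume))           # (B, 1, D, H, W)
    alpha = 1.0 - torch.exp(-sigma)                    # per-sample opacity
    ones = torch.ones_like(alpha[:, :, :1])
    trans = torch.cumprod(torch.cat([ones, 1.0 - alpha + 1e-8], dim=2),
                          dim=2)[:, :, :-1]            # transmittance T_i
    weights = alpha * trans
    return (weights * volume).sum(dim=2)               # (B, F, H, W)

class CrossModalityPipelineSketch(nn.Module):
    """End-to-end sketch: encode each input modality, overlap the volumes,
    render a feature image, and condition a (here trivial) denoiser on it."""

    def __init__(self, modalities=("eo", "sar", "lidar"), feat_dim=16, grid=32):
        super().__init__()
        self.encoders = nn.ModuleDict(
            {m: ImageToVoxelEncoder(1, feat_dim, grid) for m in modalities})
        self.density = nn.Conv3d(feat_dim, 1, kernel_size=1)
        # Placeholder for the modality-specific conditional diffusion model.
        self.denoiser = nn.Conv2d(feat_dim + 1, 1, 3, padding=1)

    def forward(self, inputs):      # e.g. {"eo": (B,1,H,W), "sar": (B,1,H,W)}
        volumes = [self.encoders[m](x) for m, x in inputs.items()]
        fused = torch.stack(volumes).mean(dim=0)       # overlap feature volumes
        feat_img = render_feature_image(fused, self.density)
        noisy = torch.randn_like(feat_img[:, :1])      # one denoising step only
        return self.denoiser(torch.cat([noisy, feat_img], dim=1))

model = CrossModalityPipelineSketch()
out = model({"eo": torch.rand(2, 1, 32, 32), "sar": torch.rand(2, 1, 32, 32)})
print(out.shape)                                       # torch.Size([2, 1, 32, 32])
```

Note the design point this makes explicit: because every modality is encoded into the same voxel grid before rendering, any subset of input modalities can be fused by simple overlap, which is what permits arbitrary input/output combinations.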
Problem

Research questions and friction points this paper is trying to address.

Image Realism
Sensor Fusion
Scene Understanding
Innovation

Methods, ideas, or system contributions that make the work stand out.

CrossModalityDiffusion
Multi-sensor Image Generation
Shape-Agnostic Representation
👥 Authors
Alex Berian (University of Arizona, ECE Dept.)
Daniel Brignac (University of Arizona, ECE Dept.)
JhihYang Wu (University of Arizona, ECE Dept.)
Natnael Daba (University of Arizona, ECE Dept.)
Abhijit Mahalanobis (Department of ECE, University of Arizona)
Research interests: Pattern Recognition, Image Processing, Compressive Sensing