DiffTex: Differentiable Texturing for Architectural Proxy Models

📅 2025-09-27
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
Architectural proxy models often lose fine texture detail as a consequence of geometric simplification. Method: The paper proposes a differentiable rendering framework that automatically generates high-fidelity texture maps from unstructured, registered RGB images. It establishes explicit, optimizable correspondences between UV texels and multi-view image pixels, and jointly optimizes blending weights to enforce photometric consistency, perspective correctness, and cross-view texture coherence. Contribution/Results: The authors present this as the first work to apply differentiable rendering to texel-level texture synthesis for architectural proxy models, supporting unstructured input and end-to-end optimization. Experiments show that the method faithfully recovers the color and geometric detail of the original dense reconstructions across diverse building structures and capture conditions, producing visually seamless, high-fidelity textures that substantially improve the realism and robustness of the simplified models.
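To make the optimization step concrete, below is a minimal PyTorch sketch of jointly optimizing per-texel blending weights against a photometric objective. The tensor shapes, the synthetic data, and names such as `blend_logits` and `sampled_colors` are illustrative assumptions rather than the authors' implementation; a softmax over per-texel logits stands in for the paper's blending parameterization, and the perspective-correctness and cross-view coherence terms are omitted.

```python
# Minimal sketch (not the authors' code): per-texel blending weights over
# candidate views, optimized by gradient descent so the blended texel colors
# match reference observations. Shapes, names, and data are illustrative.
import torch

torch.manual_seed(0)
num_texels, num_views = 1024, 6

# Hypothetical inputs: for each texel, the RGB color sampled from each
# registered view at its projected pixel location (T x V x 3), plus a
# visibility mask marking the views in which the texel is actually seen.
sampled_colors = torch.rand(num_texels, num_views, 3)
visibility = (torch.rand(num_texels, num_views) > 0.3).float()

# Stand-in reference color per texel (e.g. re-rendered from the dense
# reconstruction); synthesized here purely for the sketch.
target_colors = (sampled_colors * visibility.unsqueeze(-1)).sum(1) / \
                visibility.sum(1, keepdim=True).clamp(min=1.0)

# Learnable blending logits; a softmax over views yields the blend weights.
blend_logits = torch.zeros(num_texels, num_views, requires_grad=True)
optimizer = torch.optim.Adam([blend_logits], lr=0.05)

for step in range(200):
    # Mask out invisible views before normalizing the weights.
    logits = blend_logits.masked_fill(visibility == 0, -1e9)
    weights = torch.softmax(logits, dim=1)                      # (T, V)
    blended = (weights.unsqueeze(-1) * sampled_colors).sum(1)   # (T, 3)

    # Photometric consistency term only; the paper also enforces perspective
    # correctness and cross-view coherence, which this sketch omits.
    loss = torch.nn.functional.mse_loss(blended, target_colors)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final photometric loss: {loss.item():.6f}")
```

In the actual pipeline, `sampled_colors` would come from re-projecting each texel into the registered photographs, and the blended texture would be re-rendered into each view through the differentiable renderer rather than compared against precomputed targets.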

📝 Abstract
Simplified proxy models are commonly used to represent architectural structures, reducing storage requirements and enabling real-time rendering. However, the geometric simplifications inherent in proxies result in a loss of fine color and geometric details, making it essential for textures to compensate for this loss. Preserving the rich texture information of the original dense architectural reconstructions remains a daunting task, particularly when working with unordered RGB photographs. We propose an automated method for generating realistic texture maps for architectural proxy models at the texel level from an unordered collection of registered photographs. Our approach establishes correspondences between texels on a UV map and pixels in the input images, with each texel's color computed as a weighted blend of associated pixel values. Using differentiable rendering, we optimize blending parameters to ensure photometric and perspective consistency while maintaining seamless texture coherence. Experimental results demonstrate the effectiveness and robustness of our method across diverse architectural models and varying photographic conditions, enabling the creation of high-quality textures that preserve visual fidelity and structural detail.
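Written out in illustrative notation (the symbols below are assumptions consistent with the abstract, not the paper's exact equations), the weighted texel blend and the optimization objective might take a form such as:

```latex
% Illustrative notation only; symbols are assumptions, not the paper's own.
% x_t: surface point of texel t;  V(t): views in which t is visible;
% \pi_i: projection of x_t into image I_i;  \alpha_i(t): learnable logits.
\[
  C(t) \;=\; \sum_{i \in V(t)} w_i(t)\, I_i\!\bigl(\pi_i(x_t)\bigr),
  \qquad
  w_i(t) \;=\; \frac{\exp\bigl(\alpha_i(t)\bigr)}
                    {\sum_{j \in V(t)} \exp\bigl(\alpha_j(t)\bigr)},
\]
\[
  \min_{\{\alpha\}} \;\sum_{i} \bigl\lVert \mathcal{R}_i(C) - I_i \bigr\rVert_1
  \;+\; \lambda\, E_{\text{seam}}(C),
\]
% where \mathcal{R}_i re-renders the textured proxy into view i via the
% differentiable renderer and E_seam penalizes cross-view texture seams.
```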
Problem

Research questions and friction points this paper is trying to address.

Generates realistic textures for architectural proxy models from photos
Optimizes texture blending using differentiable rendering for visual consistency
Preserves visual fidelity in simplified models despite geometric detail loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generates texel-level textures from unordered photographs (see the correspondence sketch after this list)
Computes texel colors via weighted blending of pixels
Optimizes blending parameters using differentiable rendering
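A minimal sketch, under assumed pinhole-camera conventions, of how explicit texel-to-pixel correspondences could be established by projecting texel centers into each registered view. Occlusion testing and the optimizable refinement of the correspondences are omitted, and all names, camera parameters, and data below are hypothetical.

```python
# Minimal sketch (assumed, not the authors' code) of building candidate
# texel-to-pixel correspondences: each texel's 3D surface point is projected
# into every registered camera, and in-frame projections become candidates
# for the blending stage. Occlusion handling is omitted.
import numpy as np

rng = np.random.default_rng(0)

def project(points, K, R, t):
    """Pinhole projection of Nx3 world points into pixel coordinates."""
    cam = points @ R.T + t                      # world -> camera frame
    uv = cam @ K.T                              # apply intrinsics
    return uv[:, :2] / uv[:, 2:3], cam[:, 2]    # pixels, depth

# Hypothetical data: 3D points at texel centers of the proxy mesh, and a few
# registered cameras (intrinsics K, rotation R, translation t).
texel_points = rng.uniform(-1.0, 1.0, size=(500, 3))
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
cameras = [(K, np.eye(3), np.array([0.0, 0.0, 4.0 + i])) for i in range(4)]

correspondences = []   # list of (texel_index, view_index, pixel_uv)
for view_idx, (K_i, R_i, t_i) in enumerate(cameras):
    uv, depth = project(texel_points, K_i, R_i, t_i)
    in_front = depth > 0
    in_frame = (uv[:, 0] >= 0) & (uv[:, 0] < 640) & \
               (uv[:, 1] >= 0) & (uv[:, 1] < 480)
    for texel_idx in np.nonzero(in_front & in_frame)[0]:
        correspondences.append((int(texel_idx), view_idx, uv[texel_idx]))

print(f"{len(correspondences)} candidate texel-pixel correspondences")
```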