The Moon's Many Faces: A Single Unified Transformer for Multimodal Lunar Reconstruction

📅 2025-05-08
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the joint problem of lunar surface 3D reconstruction and reflectance parameter estimation. We propose the first multimodal learning framework tailored for planetary science. Methodologically, we design a unified Transformer encoder-decoder architecture that models grayscale images, digital elevation models (DEMs), surface normal maps, and albedo maps as four distinct modalities. Modality-specific embeddings and a shared latent space enable end-to-end cross-modal generation from arbitrary input modalities to any target modality. Crucially, we decouple geometric (height) and photometric (reflectance) information for the first time in planetary science, yielding physically interpretable cross-modal representations. Experiments on real lunar data demonstrate high-fidelity joint prediction of DEMs and albedo maps, validate physical consistency across all four modalities, and establish a novel paradigm for photometric normalization and image registration in planetary remote sensing.
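No code accompanies this listing, so as a purely illustrative sketch of the any-to-any design described above — modality-specific embeddings feeding a shared latent space, then decoding into an arbitrary target modality — here is a minimal NumPy toy. All names, sizes, and the single attention layer are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the paper does not state sizes here)
PATCH, D = 16, 32                 # patch tokens per image, latent width
MODALITIES = ["grayscale", "dem", "normals", "albedo"]

# Modality-specific input/output projections and learned modality embeddings
proj_in  = {m: rng.normal(0, 0.02, (D, D)) for m in MODALITIES}
proj_out = {m: rng.normal(0, 0.02, (D, D)) for m in MODALITIES}
mod_emb  = {m: rng.normal(0, 0.02, (1, D)) for m in MODALITIES}

def attention(x):
    """Single-head self-attention standing in for the shared encoder-decoder."""
    scores = x @ x.T / np.sqrt(D)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ x

def translate(tokens, src, tgt):
    """Map patch tokens of modality `src` to modality `tgt` via the shared space."""
    z = tokens @ proj_in[src] + mod_emb[src]       # modality-tagged embedding
    z = attention(z)                               # shared latent processing
    return (z + mod_emb[tgt]) @ proj_out[tgt]      # target-conditioned decoding

gray   = rng.normal(size=(PATCH, D))               # dummy grayscale patch tokens
dem    = translate(gray, "grayscale", "dem")       # geometric (height) branch
albedo = translate(gray, "grayscale", "albedo")    # photometric (reflectance) branch
print(dem.shape, albedo.shape)                     # (16, 32) (16, 32)
```

The point of the sketch is the routing: one shared core, with the source and target modalities selected only by which embeddings and projections are applied, which is what allows translation from any input modality to any output modality.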

📝 Abstract
Multimodal learning is an emerging research topic across multiple disciplines but has rarely been applied to planetary science. In this contribution, we show that reflectance parameter estimation and image-based 3D reconstruction of lunar images can be formulated as a multimodal learning problem. We propose a single, unified transformer architecture trained to learn shared representations across multiple sources such as grayscale images, digital elevation models, surface normals, and albedo maps. The architecture supports flexible translation from any input modality to any target modality. Predicting DEMs and albedo maps from grayscale images simultaneously solves the task of 3D reconstruction of planetary surfaces and disentangles photometric parameters from height information. Our results demonstrate that our foundation model learns physically plausible relations across these four modalities. Adding more input modalities in the future will enable tasks such as photometric normalization and co-registration.
Problem

Research questions and friction points this paper is trying to address.

Unified transformer for multimodal lunar reconstruction
Estimating reflectance and 3D structure from lunar images
Learning shared representations across diverse lunar data modalities
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified transformer for multimodal lunar reconstruction
Shared representations across multiple lunar data sources
Flexible modality translation for 3D reconstruction
Tom Sander
Meta FAIR & École polytechnique
Privacy Preserving Machine Learning
M. Tenthoff
Image Analysis Group, TU Dortmund University, Germany
K. Wohlfarth
Image Analysis Group, TU Dortmund University, Germany
Christian Wöhler
Image Analysis Group, TU Dortmund University, Germany