🤖 AI Summary
To address the challenge of modeling heterogeneous spatio-temporal modalities, such as RGB images and point clouds, in geospatial applications, where large structural and semantic discrepancies hinder cross-modal integration, this paper proposes AllSpark, a ten-modality spatio-temporal foundation model for geospatial understanding. The paper introduces the "Language as Reference Framework" (LaRF) paradigm, which combines modality-specific encoders with modality-bridging mechanisms, integrating language-space alignment, task-adaptive prompting, and a multimodal large language model (MLLM). The model substantially improves cross-modal interpretability and generalization: without any additional task-specific training, it performs few-shot classification on the RGB and point cloud modalities, outperforming baselines by up to 41.82%. It is presented as the first model to enable unified representation learning and efficient cross-modal transfer across ten distinct spatio-temporal modalities.
📝 Abstract
Leveraging multimodal data is an inherent requirement for comprehending geographic objects. However, due to the high heterogeneity in structure and semantics among various spatio-temporal modalities, the joint interpretation of multimodal spatio-temporal data has long been an extremely challenging problem. The primary challenge resides in striking a trade-off between the cohesion and autonomy of diverse modalities, a trade-off that grows increasingly nonlinear as the number of modalities expands. Inspired by the human cognitive system and linguistic philosophy, where perceptual signals from the five senses converge into language, we introduce the Language as Reference Framework (LaRF), a fundamental principle for constructing a multimodal unified model. Building upon this, we propose AllSpark, a multimodal spatio-temporal general artificial intelligence model that integrates ten different modalities into a unified framework. To achieve modal cohesion, AllSpark introduces a modal bridge and a multimodal large language model (MLLM) to map diverse modal features into the language feature space. To maintain modality autonomy, AllSpark uses modality-specific encoders to extract tokens from the various spatio-temporal modalities. Finally, observing a gap between the model's interpretability and downstream tasks, we design modality-specific prompts and task heads, enhancing the model's generalization capability on specific tasks. Experiments indicate that the incorporation of language enables AllSpark to excel in few-shot classification tasks for the RGB and point cloud modalities without additional training, surpassing baseline performance by up to 41.82%. The source code is available at https://github.com/GeoX-Lab/AllSpark.
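The cohesion/autonomy split described above (modality-specific encoders for autonomy, a modal bridge projecting into a shared language space for cohesion, with task prompts prepended before the LLM) can be sketched as a toy pipeline. This is a minimal illustration of the data flow only; all class names, dimensions, and the hash-based "encoders" are invented for the sketch and are not the paper's implementation.

```python
# Toy sketch of a LaRF-style pipeline: modality-specific encoders produce
# tokens, a per-modality "modal bridge" projects them into one shared
# language-feature space, and task-adaptive prompt tokens are prepended.
# All names and dimensions are illustrative, not from the AllSpark code.
import random

DIM = 8  # toy dimensionality of the shared language feature space


class ModalityEncoder:
    """Modality autonomy: each modality keeps its own encoder and token dim."""

    def __init__(self, modality: str, token_dim: int):
        self.modality = modality
        self.token_dim = token_dim

    def encode(self, raw: str) -> list[list[float]]:
        # Stand-in for a real backbone: deterministic pseudo-features per input.
        rng = random.Random(hash((self.modality, raw)) & 0xFFFF)
        return [[rng.uniform(-1.0, 1.0) for _ in range(self.token_dim)]]


class ModalBridge:
    """Modal cohesion: linear projection from a modality's token space to DIM."""

    def __init__(self, in_dim: int, out_dim: int = DIM, seed: int = 0):
        rng = random.Random(seed)
        self.W = [[rng.uniform(-0.1, 0.1) for _ in range(in_dim)]
                  for _ in range(out_dim)]

    def project(self, tokens: list[list[float]]) -> list[list[float]]:
        # Each output token is the matrix-vector product W @ token.
        return [[sum(w * x for w, x in zip(row, tok)) for row in self.W]
                for tok in tokens]


def build_prompted_sequence(prompt_tokens, language_tokens):
    # Task-adaptive prompts are prepended before the (M)LLM consumes the sequence.
    return prompt_tokens + language_tokens


encoders = {"rgb": ModalityEncoder("rgb", 16),
            "point_cloud": ModalityEncoder("point_cloud", 12)}
bridges = {"rgb": ModalBridge(16, seed=1),
           "point_cloud": ModalBridge(12, seed=2)}


def embed(modality: str, raw: str) -> list[list[float]]:
    """Encode with the modality's own encoder, then bridge to language space."""
    return bridges[modality].project(encoders[modality].encode(raw))


seq = build_prompted_sequence(
    [[0.0] * DIM],                                      # one toy prompt token
    embed("rgb", "tile_042") + embed("point_cloud", "scan_007"),
)
print(len(seq), len(seq[0]))  # prompt token + 2 modality tokens, all DIM-wide
```

The point of the sketch is that both modalities, despite different native token widths (16 vs. 12 here), arrive in the same DIM-dimensional sequence, which is what lets a single language-space model consume them jointly.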