Visual Language Models as Operator Agents in the Space Domain

📅 2025-01-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of cross-modal semantic understanding in autonomous decision-making and control for space missions. We propose the first framework to systematically integrate vision-language models (VLMs) into a closed-loop space operations architecture. Methodologically, we unify multimodal prompt engineering, the Kerbal Space Program Differential Games (KSPDG) simulation environment, and an embedded robotic vision system into an end-to-end "perception–understanding–decision–execution" pipeline. Our approach overcomes the dual limitations of large language models (LLMs), which lack visual grounding, and of purely vision-based methods, which lack semantic reasoning. Experiments show that VLM-based agents achieve orbital-maneuvering performance comparable to domain-specific algorithms and non-multimodal LLMs in KSPDG. Furthermore, on a physical robotic platform, the system successfully identifies satellite components and localizes anomalies, constituting the first experimental validation of VLM-driven autonomous on-orbit inspection and fault diagnosis.
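To make the closed-loop pipeline concrete, here is a minimal sketch of a single perception–understanding–decision–execution step, assuming an OpenAI-style multimodal chat endpoint. The prompt wording, the JSON action schema, and the helpers `grab_screenshot`, `format_telemetry`, and `apply_burn` are hypothetical stand-ins for the paper's KSPDG plumbing, not the authors' code.

```python
# Sketch of one VLM control step: screenshot + telemetry in, burn command out.
# Endpoint, model name, prompt, and action schema are illustrative assumptions.
import base64
import json

from openai import OpenAI  # assumes an OpenAI-style multimodal chat API

client = OpenAI()

SYSTEM_PROMPT = (
    "You are an autonomous spacecraft operator in a pursuit-evasion game. "
    "Given a screenshot of the game UI and telemetry text, reply with JSON "
    'only: {"throttle": [fore, right, down], "duration_s": <float>}, '
    "with each throttle component in [-1, 1]."
)

def decide(png_bytes: bytes, telemetry: str) -> dict:
    """Perception + understanding + decision: one VLM call per control step."""
    data_url = "data:image/png;base64," + base64.b64encode(png_bytes).decode()
    response = client.chat.completions.create(
        model="gpt-4o",  # any multimodal chat model; an assumption here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": [
                {"type": "text", "text": telemetry},
                {"type": "image_url", "image_url": {"url": data_url}},
            ]},
        ],
    )
    return json.loads(response.choices[0].message.content)

# Execution half of the loop, against hypothetical environment helpers:
#
#   obs = env.reset()
#   while not done:
#       action = decide(grab_screenshot(), format_telemetry(obs))
#       obs, done = apply_burn(env, action["throttle"], action["duration_s"])
```

In practice the parsed action would need validation (clamping throttles, handling malformed JSON) before being passed to the simulator; the sketch omits that for brevity.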

📝 Abstract
This paper explores the application of Vision-Language Models (VLMs) as operator agents in the space domain, focusing on both software and hardware operational paradigms. Building on advances in Large Language Models (LLMs) and their multimodal extensions, we investigate how VLMs can enhance autonomous control and decision-making in space missions. In the software context, we employ VLMs within the Kerbal Space Program Differential Games (KSPDG) simulation environment, enabling the agent to interpret visual screenshots of the graphical user interface to perform complex orbital maneuvers. In the hardware context, we integrate VLMs with robotic systems equipped with cameras to inspect and diagnose physical space objects, such as satellites. Our results demonstrate that VLMs can effectively process visual and textual data to generate contextually appropriate actions, competing with traditional methods and non-multimodal LLMs in simulation tasks, and showing promise in real-world applications.
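On the hardware side, the analogous step is a single camera frame sent to the VLM for component identification and anomaly flagging. The sketch below assumes an OpenCV-readable camera and the same OpenAI-style endpoint; the prompt and output schema are illustrative assumptions, not the paper's implementation.

```python
# Sketch of VLM-driven inspection: one robot-camera frame in, a structured
# component/anomaly report out. Camera index, prompt, and schema are assumed.
import base64

import cv2  # OpenCV, standing in for the robot's camera driver
from openai import OpenAI

client = OpenAI()

INSPECTION_PROMPT = (
    "You are inspecting a satellite mock-up. List the visible components "
    "(e.g., solar panel, antenna, thruster) and flag any anomaly such as "
    'damage or misalignment. Reply with JSON only: {"components": [...], '
    '"anomalies": [{"component": str, "description": str}]}.'
)

def inspect_frame(camera_index: int = 0) -> str:
    """Grab one frame from the camera and ask the VLM to diagnose it."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("camera read failed")
    ok, png = cv2.imencode(".png", frame)
    if not ok:
        raise RuntimeError("PNG encoding failed")
    data_url = "data:image/png;base64," + base64.b64encode(png.tobytes()).decode()
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any multimodal chat model
        messages=[{"role": "user", "content": [
            {"type": "text", "text": INSPECTION_PROMPT},
            {"type": "image_url", "image_url": {"url": data_url}},
        ]}],
    )
    return response.choices[0].message.content  # JSON report, per the prompt

if __name__ == "__main__":
    print(inspect_frame())
```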
Problem

Research questions and friction points this paper is trying to address.

Visual Language Models
Space Missions
Decision and Control
Innovation

Methods, ideas, or system contributions that make the work stand out.

Visual Language Models
Space Missions
Image Understanding
Alejandro Carrasco
Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
Marco Nedungadi
Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
Enrico M. Zucchelli
Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
Amit Jain
Victor Rodríguez-Fernández
Universidad Politécnica de Madrid, Madrid 28038, Spain
Richard Linares
Associate Professor, Dept. of Aeronautics & Astronautics, MIT
Astrodynamics · Artificial Intelligence · Space Situational Awareness · Control · Guidance