ThermoAct: Thermal-Aware Vision-Language-Action Models for Robotic Perception and Decision-Making

📅 2026-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work proposes ThermoAct, the first Vision-Language-Action (VLA) framework to integrate thermal imaging with visible-light vision and natural language instructions for safer, more efficient human-robot collaboration. Existing robotic systems rely predominantly on visual perception alone and often fail to balance operational effectiveness with risk awareness. The proposed approach uses a vision-language model to interpret natural language commands and decompose them into executable subtasks, while thermal sensing enriches the robot's understanding of the physical environment and helps it anticipate potential hazards. Real-world experiments demonstrate that the framework improves both task success rates and operational safety compared to state-of-the-art purely visual systems.

📝 Abstract
In recent human-robot collaboration environments, there is a growing focus on integrating diverse sensor data beyond visual information to enable safer and more intelligent task execution. Although thermal data can be crucial for enhancing robot safety and operational efficiency, its integration has been relatively overlooked in prior research. This paper proposes a novel Vision-Language-Action (VLA) framework that incorporates thermal information for robot task execution. The proposed system leverages a Vision-Language Model (VLM) as a high-level planner to interpret complex natural language commands and decompose them into simpler sub-tasks. This approach facilitates efficient data collection and robust reasoning for complex operations. Unlike conventional methods that rely solely on visual data, our approach integrates thermal information, enabling the robot to perceive physical properties and proactively ensure environmental safety. Experimental results from real-world task scenarios validate the feasibility of our proposed framework, suggesting its potential to enhance task success rates and safety compared to existing vision-based systems.
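The pipeline the abstract describes, a VLM high-level planner that decomposes a command into sub-tasks, with thermal sensing gating unsafe actions, can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the function names (`plan_subtasks`, `execute`), the hardcoded sub-task list standing in for the VLM, and the 50 °C contact-safety threshold are all assumptions for the demo.

```python
# Hedged sketch of a thermal-aware VLA loop. All names and the safety
# threshold are illustrative assumptions, not the paper's actual system.

SAFE_MAX_TEMP_C = 50.0  # assumed safe contact temperature (illustrative)

def plan_subtasks(instruction: str) -> list[str]:
    """Stand-in for the VLM high-level planner: decompose a natural
    language command into simpler sub-tasks (hardcoded for the demo)."""
    if "cup" in instruction:
        return ["locate cup", "check thermal safety", "grasp cup", "hand over cup"]
    return ["inspect scene"]

def max_region_temp(thermal_frame: list[list[float]]) -> float:
    """Hottest pixel in the (toy) thermal image, in degrees Celsius."""
    return max(max(row) for row in thermal_frame)

def execute(instruction: str, thermal_frame: list[list[float]]) -> list[str]:
    """Run sub-tasks in order, aborting before manipulation if the
    thermal check finds the target too hot to touch."""
    log = []
    for task in plan_subtasks(instruction):
        if task == "check thermal safety" and max_region_temp(thermal_frame) > SAFE_MAX_TEMP_C:
            log.append("abort: target exceeds safe contact temperature")
            break
        log.append(f"done: {task}")
    return log

# Toy 2x2 thermal frame with one hot region (e.g. a fresh cup of coffee).
hot_cup = [[23.0, 24.1], [82.5, 25.0]]
print(execute("bring me the cup", hot_cup))
# → ['done: locate cup', 'abort: target exceeds safe contact temperature']
```

A purely visual system has no basis for the abort branch; the thermal channel is what lets the planner refuse the grasp before contact, which is the safety behavior the paper evaluates.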
Problem

Research questions and friction points this paper is trying to address.

thermal-aware
robotic perception
vision-language-action
multimodal sensing
human-robot collaboration
Innovation

Methods, ideas, or system contributions that make the work stand out.

Thermal-aware perception
Vision-Language-Action model
Human-robot collaboration
Multimodal sensor fusion
Robot safety
🔎 Similar Papers
2024-03-22 · IEEE Transactions on Circuits and Systems for Video Technology (Print) · Citations: 2