Shared Autonomy through LLMs and Reinforcement Learning for Applications to Ship Hull Inspections

📅 2025-09-05
📈 Citations: 0
Influential: 0
🤖 AI Summary
Maritime operations face high risk and strong uncertainty, particularly in hull inspection tasks requiring reliable human–robot collaboration. Method: This study proposes a shared-autonomy human–robot framework integrating large language models (LLMs) and reinforcement learning (RL) within a behavior-tree-based modular mission manager: LLMs enable intuitive high-level intent input and semantic parsing; RL supports adaptive, intent-aware multi-agent coordination; and behavior trees provide interpretability and execution reliability. The system adopts a human-in-the-loop multi-agent architecture supporting dynamic intent perception and real-time coordination. Contribution/Results: Preliminary evaluations in simulation and real-world lake-like environments indicate reduced operator cognitive load alongside improved task transparency, robustness, and environmental adaptability. The framework offers a modular, scalable foundation for human–robot collaboration in complex marine operations.

📝 Abstract
Shared autonomy is a promising paradigm in robotic systems, particularly within the maritime domain, where complex, high-risk, and uncertain environments necessitate effective human-robot collaboration. This paper investigates the interaction of three complementary approaches to advance shared autonomy in heterogeneous marine robotic fleets: (i) the integration of Large Language Models (LLMs) to facilitate intuitive high-level task specification and support hull inspection missions, (ii) the implementation of human-in-the-loop interaction frameworks in multi-agent settings to enable adaptive and intent-aware coordination, and (iii) the development of a modular Mission Manager based on Behavior Trees to provide interpretable and flexible mission control. Preliminary results from simulation and real-world lake-like environments demonstrate the potential of this multi-layered architecture to reduce operator cognitive load, enhance transparency, and improve adaptive behaviour alignment with human intent. Ongoing work focuses on fully integrating these components, refining coordination mechanisms, and validating the system in operational port scenarios. This study contributes to establishing a modular and scalable foundation for trustworthy, human-collaborative autonomy in safety-critical maritime robotics applications.
Problem

Research questions and friction points this paper is trying to address.

Integrating LLMs for intuitive ship hull inspection task specification
Implementing human-in-the-loop frameworks for adaptive multi-agent coordination
Developing modular mission control using Behavior Trees for interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs for intuitive task specification
Human-in-the-loop multi-agent coordination
Modular Mission Manager with Behavior Trees
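
As a rough illustration of how the pieces above could fit together, the following is a minimal sketch of a behavior-tree mission manager whose subtree is selected from a high-level operator request. All class names, task names, and the keyword-based `parse_intent` stub are hypothetical stand-ins; in the paper, semantic parsing is performed by an LLM, not keyword matching.

```python
# Minimal behavior-tree sketch. Everything here is an illustrative
# assumption, not the paper's actual implementation.
from enum import Enum

class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3

class Action:
    """Leaf node wrapping a single robot task (hypothetical)."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return self.fn()

class Sequence:
    """Composite node: succeeds only if every child succeeds, in order."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status  # propagate FAILURE or RUNNING
        return Status.SUCCESS

def parse_intent(command):
    """Stand-in for LLM semantic parsing: maps an operator's
    natural-language request to a mission subtree (toy keyword match)."""
    if "hull" in command.lower():
        return Sequence([
            Action("approach_hull", lambda: Status.SUCCESS),
            Action("scan_section", lambda: Status.SUCCESS),
            Action("report_findings", lambda: Status.SUCCESS),
        ])
    return Action("hold_position", lambda: Status.SUCCESS)

mission = parse_intent("Inspect the hull near the stern")
print(mission.tick())  # Status.SUCCESS once all inspection steps succeed
```

The appeal of this structure, as the paper argues, is interpretability: the operator (or an RL policy acting on their behalf) can inspect and modify the tree at the level of named tasks rather than opaque policy outputs.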
Cristiano Caissutti
Dept. of Information Engineering, University of Pisa, Italy
Paolo Marinelli
Dept. of Information Engineering, University of Pisa, Italy
Estelle Gerbier
Dept. of Information Engineering, University of Pisa, Italy
Andrea Munafò
Dept. of Information Engineering, University of Pisa, Italy
Ehsan Khorrambakht
Dept. of Information Engineering, University of Pisa, Italy
Andrea Caiti
Università di Pisa
automation · ocean engineering · marine robotics · underwater robotics · underwater acoustics