🤖 AI Summary
To address the challenge of autonomous coral sampling in reef conservation, this paper proposes a deep reinforcement learning (RL)-based control framework for underwater robots. Methodologically, it integrates a high-fidelity digital twin simulation environment built on a general-purpose game engine with software-in-the-loop (SIL) and hardware-in-the-loop (HIL) testing and real-time underwater motion capture, enabling synchronized virtual–physical modeling. Crucially, it introduces a zero-shot simulation-to-reality (sim-to-real) transfer strategy, allowing the trained deep RL controller to be deployed directly on the physical platform without real-world fine-tuning. Experiments demonstrate that the trained AI controller executes precise coral sample collection on an operational underwater platform, validating the framework's effectiveness and generalization in a complex, dynamic aquatic environment. This work establishes a scalable, robust paradigm for learned robotic control in automated marine ecological monitoring.
📝 Abstract
This paper presents a reinforcement learning (RL) environment for developing an autonomous underwater robotic coral sampling agent, a task crucial to coral reef conservation and research. An RL-trained artificial intelligence (AI) controller is developed with a digital twin (DT) in simulation via software-in-the-loop (SIL) and hardware-in-the-loop (HIL) testing, and is subsequently verified in physical experiments. An underwater motion capture (MOCAP) system provides real-time 3D position and orientation feedback during verification testing, enabling precise synchronization between the digital and physical domains. A key novelty of this approach is the combined use of a general-purpose game engine for simulation, deep RL, and real-time underwater motion capture to achieve an effective zero-shot sim-to-real strategy.
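The zero-shot sim-to-real idea above hinges on the policy not overfitting the simulator's exact physics. A minimal sketch of that principle, in a toy one-dimensional stand-in for the digital-twin environment (all class names, dynamics, and parameter ranges here are illustrative assumptions, not taken from the paper): drag and buoyancy are randomized each episode, and a fixed controller tuned for nominal physics is evaluated on settings it never saw.

```python
import random

class CoralSamplingEnv:
    """Toy 1-D stand-in for a digital-twin environment: the agent drives a
    vehicle toward a coral sample point along one axis. Illustrative only."""

    def __init__(self, target=1.0, dt=0.1, max_steps=200):
        self.target = target
        self.dt = dt
        self.max_steps = max_steps

    def reset(self, randomize=True):
        # Domain randomization: vary drag and buoyancy each episode so a
        # controller cannot exploit one fixed physics configuration.
        self.drag = random.uniform(0.5, 2.0) if randomize else 1.0
        self.buoyancy = random.uniform(-0.02, 0.02) if randomize else 0.0
        self.pos, self.vel, self.t = 0.0, 0.0, 0
        return self._obs()

    def _obs(self):
        # Observation: (position error to the sample point, velocity)
        return (self.target - self.pos, self.vel)

    def step(self, thrust):
        # Damped dynamics: acceleration = thrust - drag*vel + buoyancy
        self.vel += (thrust - self.drag * self.vel + self.buoyancy) * self.dt
        self.pos += self.vel * self.dt
        self.t += 1
        err = abs(self.target - self.pos)
        reward = -err  # dense shaping toward the sample point
        done = err < 0.02 or self.t >= self.max_steps
        return self._obs(), reward, done

def controller(obs, kp=2.0, kd=1.5):
    # Stand-in for the learned policy: PD control on position error,
    # tuned once for the nominal (non-randomized) physics.
    err, vel = obs
    return kp * err - kd * vel

# "Zero-shot" check: run the fixed controller on randomized physics it was
# never tuned for and count how often it still reaches the target.
random.seed(0)
successes = 0
for _ in range(20):
    env = CoralSamplingEnv()
    obs, done = env.reset(), False
    while not done:
        obs, reward, done = env.step(controller(obs))
    successes += abs(obs[0]) < 0.02
print(f"reached target in {successes}/20 randomized episodes")
```

The same structure scales up in the paper's setting: the game-engine digital twin plays the role of the environment, a deep RL policy replaces the hand-tuned controller, and MOCAP feedback closes the loop on the physical platform.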