🤖 AI Summary
To address low thrust control accuracy, high physical experimentation costs, and inefficient reinforcement learning (RL) training for fin-inspired soft underwater robots under complex hydrodynamic conditions, this paper proposes an adaptive control framework integrating a deep neural network (DNN) surrogate model with RL. Methodologically, a lightweight DNN surrogate replaces computationally expensive fluid simulations and physical experiments; a grid-switching mechanism dynamically selects submodels tailored to distinct force-magnitude regimes; and closed-loop policy optimization is achieved via proximal policy optimization (PPO) or soft actor-critic (SAC). The key contribution lies in the first integration of DNN-based surrogate modeling, grid-wise adaptive switching, and real-time soft actuator control. Experiments on a physical soft-fin platform demonstrate ±0.05 N thrust tracking accuracy, sub-50 ms response latency, and over 10× improvement in RL training efficiency.
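The grid-switching idea described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the class name, bin layout, and the linear placeholder submodels are all assumptions, standing in for the trained DNN submodels that each cover one force-magnitude regime.

```python
# Hypothetical sketch of grid-switching surrogate selection: the controller
# routes each query to the submodel whose force-magnitude regime contains
# the current thrust reference. Linear lambdas stand in for trained DNNs.

class GridSwitchingSurrogate:
    def __init__(self, force_bins, submodels):
        # force_bins: ascending upper edges of |force| regimes,
        #             e.g. [0.1, float("inf")] for a low/high split (in N)
        # submodels:  one callable (action -> predicted thrust) per regime
        self.force_bins = force_bins
        self.submodels = submodels

    def select(self, force_ref):
        # pick the first regime whose upper edge covers the reference magnitude
        for i, edge in enumerate(self.force_bins):
            if abs(force_ref) <= edge:
                return i
        return len(self.force_bins) - 1

    def predict(self, force_ref, action):
        # evaluate only the submodel of the active regime
        return self.submodels[self.select(force_ref)](action)


# Two placeholder regimes: small forces vs. everything larger.
surrogate = GridSwitchingSurrogate(
    force_bins=[0.1, float("inf")],
    submodels=[lambda a: 0.1 * a, lambda a: 0.6 * a],
)
```

The switching itself is just a lookup, so it adds negligible latency; the accuracy gain in the paper comes from each submodel being fitted to a narrower operating range.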
📝 Abstract
This study presents a novel framework for precise force control of fin-actuated underwater robots that integrates a deep neural network (DNN)-based surrogate model with reinforcement learning (RL). To address the complex interactions with the underwater environment and the high cost of physical experiments, the DNN surrogate model acts as a simulator, enabling efficient training of the RL agent. Additionally, grid-switching control selects submodels optimized for specific force-reference ranges, improving control accuracy and stability. Experimental results show that the RL agent, trained in the surrogate simulation, generates complex thrust motions and achieves precise control of a real soft fin actuator. This approach offers an efficient control solution for fin-actuated robots in challenging underwater environments.
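To make the surrogate-as-simulator idea concrete, here is a minimal sketch of how a surrogate thrust model could be wrapped as a tracking environment for an RL agent. Everything here is an assumption for illustration: the class name, the linear surrogate, and the negative-absolute-error reward are placeholders, not the paper's PPO/SAC setup or trained DNN.

```python
# Hypothetical sketch: a surrogate model wrapped as an RL environment for
# thrust-reference tracking, so the agent trains without fluid simulation
# or hardware. The reward penalizes deviation from the thrust reference.

class SurrogateThrustEnv:
    def __init__(self, surrogate, force_ref):
        self.surrogate = surrogate  # callable: action -> predicted thrust (N)
        self.force_ref = force_ref  # target thrust (N)

    def step(self, action):
        thrust = self.surrogate(action)          # cheap forward pass
        reward = -abs(thrust - self.force_ref)   # dense tracking reward
        return thrust, reward


# Placeholder surrogate: thrust proportional to the actuation command.
env = SurrogateThrustEnv(surrogate=lambda a: 0.4 * a, force_ref=0.2)
thrust, reward = env.step(0.5)  # 0.4 * 0.5 hits the 0.2 N reference exactly
```

Because each step is a single network (here, function) evaluation rather than a fluid simulation or physical trial, rollouts are cheap, which is the source of the training-efficiency gain the paper reports.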