Towards Learning Scalable Agile Dynamic Motion Planning for Robosoccer Teams with Policy Optimization

📅 2025-02-08
📈 Citations: 0
Influential: 0
🤖 AI Summary
In fast-changing continuous environments such as RoboCup soccer, heterogeneous multi-agent dynamic obstacle avoidance and navigation remain challenging. Classical planners (e.g., RRT*, A*) incur prohibitive re-planning overhead, while mainstream learning-based approaches suffer from discretized state/action representations, homogeneous-agent assumptions, and static environment modeling, compromising real-time performance, trajectory smoothness, and scalability. Method: We propose the first end-to-end neural motion planner for continuous spaces, integrating deep reinforcement learning with policy optimization to enable heterogeneous agent coordination, online re-planning, and distributed action decoupling, without environment discretization or homogeneity constraints. Contribution/Results: Our approach achieves millisecond-level response latency. Evaluated in a full-scale 11v11 RoboCup simulation, it demonstrates high-success-rate collision-free navigation and robust team coordination, significantly advancing the state of the art in real-time responsiveness, trajectory smoothness, and scalability to large agent populations.
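The policy-optimization idea in the summary can be sketched minimally. The following is an illustrative, hedged example, not the paper's actual architecture: a REINFORCE loop training a linear Gaussian policy to steer a single agent toward a goal in continuous space, with a proximity penalty for one static obstacle. The state features, reward shaping, and all names (`GaussianPolicy`, `rollout`, `reinforce_update`) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

class GaussianPolicy:
    """Linear Gaussian policy over continuous velocity commands.

    Mean action = W @ state; exploration via fixed-std Gaussian noise.
    """
    def __init__(self, state_dim, action_dim, std=0.3):
        self.W = np.zeros((action_dim, state_dim))
        self.std = std

    def act(self, s):
        mu = self.W @ s
        return mu + self.std * rng.standard_normal(mu.shape), mu

    def grad_logp(self, s, a, mu):
        # d/dW log N(a | W s, std^2 I) = ((a - mu) / std^2) s^T
        return np.outer((a - mu) / self.std ** 2, s)

def rollout(policy, goal, obstacle, T=40, dt=0.1):
    """One episode: start at origin, reward progress toward the goal,
    penalize entering the obstacle's radius. No grid discretization."""
    pos, traj = np.zeros(2), []
    for _ in range(T):
        s = np.concatenate([goal - pos, pos - obstacle])  # relative features
        a, mu = policy.act(s)
        pos = pos + dt * np.clip(a, -1.0, 1.0)
        r = -np.linalg.norm(goal - pos)          # dense goal-seeking reward
        if np.linalg.norm(pos - obstacle) < 0.3:  # collision penalty
            r -= 5.0
        traj.append((s, a, mu, r))
    return traj

def reinforce_update(policy, traj, lr=0.05):
    """REINFORCE with reward-to-go and a normalized-advantage baseline."""
    rewards = np.array([r for *_, r in traj])
    returns = np.cumsum(rewards[::-1])[::-1]      # reward-to-go
    adv = (returns - returns.mean()) / (returns.std() + 1e-8)
    grad = np.zeros_like(policy.W)
    for (s, a, mu, _), A in zip(traj, adv):
        grad += A * policy.grad_logp(s, a, mu)
    policy.W += lr * grad / len(traj)

goal, obstacle = np.array([1.0, 1.0]), np.array([0.5, 0.6])
policy = GaussianPolicy(state_dim=4, action_dim=2)
for episode in range(200):
    reinforce_update(policy, rollout(policy, goal, obstacle))
```

Because the policy acts on relative features rather than a discretized map, a single forward pass per time step suffices at execution time, which is the property the summary's millisecond-latency claim rests on.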

📝 Abstract
In fast-paced, ever-changing environments, dynamic motion planning for multi-agent systems in the presence of obstacles is a universal and unsolved problem. From path planning around obstacles and the movement of robotic arms to the navigation of robot teams in settings such as Robosoccer, dynamic motion planning is needed to avoid collisions while reaching the target destination when multiple agents occupy the same area. In continuous domains where the world changes quickly, existing classical motion-planning algorithms such as RRT* and A* become computationally expensive to rerun at every time step. Many variations of classical, well-formulated non-learning path-planning methods have been proposed to solve this universal problem, but they fall short due to limitations in speed, smoothness, optimality, etc. Deep learning models overcome these challenges through their ability to adapt to varying environments based on past experience. However, current learning-based motion-planning models use discretized environments, do not account for heterogeneous agents or replanning, and are built to improve the efficiency of classical motion planners, leading to issues with scalability. To prevent collisions between heterogeneous team members and collisions with obstacles while trying to reach the target location, we present a learning-based dynamic navigation model and show our model working on a simple environment in the setting of a simple Robosoccer game.
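The heterogeneous, decoupled execution the abstract describes can be illustrated with a hedged sketch; the network shape, role encoding, and observation layout below are assumptions for illustration, not the paper's design. A single shared policy is evaluated independently per agent, with heterogeneity injected through role features in each agent's local observation, so execution scales linearly with team size and needs no central planner.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed layout: each agent's local observation (own pose, goal, nearby
# obstacles) is concatenated with a per-role one-hot. Roles here are an
# illustrative guess (e.g. striker / defender / keeper).
OBS_DIM, ROLE_DIM, HID, ACT_DIM = 8, 3, 32, 2
W1 = rng.standard_normal((HID, OBS_DIM + ROLE_DIM)) * 0.1
W2 = rng.standard_normal((ACT_DIM, HID)) * 0.1

def act(obs, role_onehot):
    """Decentralized execution: each agent runs the shared net on its own
    inputs, producing a bounded continuous 2D velocity command."""
    x = np.concatenate([obs, role_onehot])
    h = np.tanh(W1 @ x)
    return np.tanh(W2 @ h)

# Full 11v11 scale: 22 agents, each with its own observation and role.
roles = np.eye(ROLE_DIM)
team_obs = rng.standard_normal((22, OBS_DIM))
actions = np.stack([act(o, roles[i % ROLE_DIM]) for i, o in enumerate(team_obs)])
```

Each call to `act` is a pair of small matrix products, so per-agent inference cost is constant regardless of team size, which is one plausible way to reach the real-time responsiveness the abstract targets.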
Problem

Research questions and friction points this paper is trying to address.

Dynamic motion planning in multi-agent systems
Learning-based navigation for heterogeneous agents
Scalable agile motion in Robosoccer environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Scalable Agile Motion Planning
Policy Optimization for Multi-Agent Systems
Learning-based Dynamic Navigation Model