🤖 AI Summary
To address the low computational efficiency of large-scale, long-horizon trajectory optimization under parallel computing, this paper proposes a distributed trajectory optimization framework based on Consensus ADMM (CADMM). The method introduces CADMM to trajectory optimization for the first time, achieving O(1) per-iteration time complexity with respect to the number of segments. Closed-form solutions are derived for linear and quadratic constraints, while general inequality constraints are handled via efficient numerical solvers. The architecture supports GPU acceleration and segment-wise parallelization. Experiments demonstrate over a 10× speedup on hundred-segment trajectories and stable scalability to thousand-segment problems. Moreover, the approach significantly outperforms state-of-the-art methods in both convergence speed and trajectory smoothness.
📝 Abstract
Optimization has been widely used to generate smooth trajectories for motion planning. However, existing trajectory optimization methods struggle with large-scale, long trajectories. Recent advances in parallel computing have accelerated optimization in some fields, but how to efficiently solve trajectory optimization via parallelism remains an open question. In this paper, we propose a novel trajectory optimization framework based on the Consensus Alternating Direction Method of Multipliers (CADMM) algorithm, which decomposes the trajectory into multiple segments and solves the subproblems in parallel. The proposed framework reduces the per-iteration time complexity to O(1) with respect to the number of segments, compared to the O(N) of state-of-the-art (SOTA) approaches. Furthermore, we introduce a closed-form solution that integrates convex linear and quadratic constraints to speed up the optimization, and we also present numerical solutions for general inequality constraints. A series of simulations and experiments demonstrates that our approach outperforms the SOTA approach in terms of efficiency and smoothness. In particular, for a large-scale trajectory with one hundred segments, our method achieves over a tenfold speedup. To fully explore the potential of our algorithm on modern parallel computing architectures, we deploy our framework on a GPU and show high performance with thousands of segments.
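The core idea above, splitting one large problem into per-segment subproblems that each admit a closed-form solve and coupling them through a consensus variable, can be illustrated with a minimal sketch. This is not the paper's formulation (the names `consensus_admm`, the toy quadratic costs `f_i(x) = ||x - a_i||^2`, and the parameters `rho`, `iters` are all illustrative assumptions); it only shows the generic consensus-ADMM iteration in which the local updates are independent rows and hence trivially parallelizable:

```python
import numpy as np

# Toy global-consensus ADMM sketch (illustrative, not the paper's method):
# minimize sum_i f_i(x) with quadratic local costs f_i(x) = ||x - a_i||^2.
# Each "segment" i keeps a local copy x_i; the augmented local subproblem
#   argmin_x ||x - a_i||^2 + (rho/2)||x - z + u_i||^2
# has a closed form, and all rows update independently (parallelizable).

def consensus_admm(a, rho=1.0, iters=100):
    n, d = a.shape                       # n segments, d-dimensional variable
    x = np.zeros((n, d))                 # local copies, one per segment
    u = np.zeros((n, d))                 # scaled dual variables
    z = np.zeros(d)                      # consensus variable
    for _ in range(iters):
        # Closed-form local updates; each row is an independent subproblem.
        x = (2.0 * a + rho * (z - u)) / (2.0 + rho)
        z = (x + u).mean(axis=0)         # consensus (averaging) update
        u = u + x - z                    # dual ascent on the residual x_i - z
    return z

a = np.array([[0.0], [1.0], [5.0]])
z = consensus_admm(a)
# For these quadratic costs the minimizer is the mean of the a_i.
```

With real segment costs the closed-form row update would be replaced by each segment's own solver (or a numerical solve for general inequality constraints), while the consensus and dual updates keep the same shape.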