🤖 AI Summary
Robust localization for RoboCup Adult-Size humanoid soccer robots is challenging: sensing is constrained to stereo vision and IMU measurements, and the robot must cope with varying illumination, dynamic occlusions, gait-induced motion noise, and spurious feature correspondences. This paper proposes CLAP, a geometric-clustering-guided multimodal fusion framework that integrates particle filtering and extended Kalman filtering. Exploiting the fact that correct state estimates naturally cluster in the feature observation space, CLAP achieves globally consistent and highly robust pose estimation without symmetry assumptions or predefined outlier-rejection heuristics, suppressing both false matches and systematic biases. Evaluated in RoboCup 2024 competition scenarios, CLAP significantly outperforms conventional landmark-based localization methods, enabling reliable long-range shooting and precise defensive maneuvers; its localization stability and noise resilience rank among the best demonstrated at the competition.
📝 Abstract
In this paper, we present our localization method CLAP (Clustering to Localize Across $n$ Possibilities), which helped us win the RoboCup 2024 adult-sized autonomous humanoid soccer competition. Competition rules limited our sensor suite to stereo vision and an inertial sensor, similar to humans. In addition, our robot had to deal with varying lighting conditions, dynamic feature occlusions, noise from high-impact stepping, and mistaken features from bystanders and neighboring fields. We therefore needed an accurate and, most importantly, robust localization algorithm to serve as the foundation for our path-planning and game-strategy algorithms. CLAP meets these requirements by clustering state estimates of our robot, computed from pairs of field features, to localize its global position and orientation. Correct state estimates naturally cluster together, while incorrect estimates spread apart, making CLAP resilient to noise and incorrect inputs. CLAP is paired with a particle filter and an extended Kalman filter to improve consistency and smoothness. Compared against other landmark-based localization methods, CLAP showed similar accuracy; however, under increased false-positive feature detections, CLAP outperformed the other methods in robustness, with very little divergence and few velocity jumps. Our localization performed well in competition, allowing our robot to score on faraway goals and narrowly defend our own.