🤖 AI Summary
Safety-critical control in decentralized multi-robot systems is challenged by uncertainty in black-box trajectory predictions. Method: This paper proposes a prediction-error-driven adaptive safety control framework that, for the first time, integrates conformal decision theory with control barrier functions (CBFs), dynamically adjusting the restrictiveness of the CBF constraints to balance safety and task performance. Theoretically, we derive a time-averaged upper bound on a monotonic function of the deviation between the safety constraints computed from predicted trajectories and those computed from the true ones, which keeps the deployed safety boundary close to the ground-truth one. Methodologically, the framework unifies black-box trajectory predictors, robust controller synthesis, and monotonic constraint analysis. Results: Evaluated on multi-agent navigation scenes from the Stanford Drone Dataset, our approach achieves improvements in collision avoidance rate and task completion rate, empirically supporting the theoretical safety guarantees on real-world trajectory data.
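To make the adaptation loop concrete, below is a minimal Python sketch of the idea as summarized above: an online conformal update grows or shrinks a margin based on observed prediction errors, and that margin tightens the CBF condition. The update rule follows the standard online conformal recipe; all names and constants (`update_conformal_margin`, `target_rate`, `gamma`, and so on) are illustrative assumptions, not the paper's actual implementation.

```python
def update_conformal_margin(margin, prediction_error, err_threshold,
                            target_rate=0.1, step_size=0.05):
    """One step of the online conformal update: inflate the safety
    margin when the observed trajectory-prediction error exceeds the
    acceptable threshold, relax it otherwise. In the long run, the
    fraction of threshold violations is driven toward target_rate."""
    violated = float(prediction_error > err_threshold)
    # Clipping at zero never relaxes the constraint below its nominal form.
    return max(0.0, margin + step_size * (violated - target_rate))


def tightened_cbf_condition(h, h_dot, margin, gamma=1.0):
    """CBF condition evaluated on *predicted* neighbor trajectories,
    tightened by the adaptive margin: require
        h_dot + gamma * (h - margin) >= 0,
    i.e. treat h - margin as the effective barrier, so larger observed
    prediction errors shrink the set of admissible controls."""
    return h_dot + gamma * (h - margin) >= 0.0
```

At each step, a controller would then choose the task-optimal control among those satisfying `tightened_cbf_condition`, which is the safety/task trade-off the summary refers to.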
📝 Abstract
We address the challenge of safe control in decentralized multi-agent robotic settings, where agents use uncertain black-box models to predict other agents' trajectories. We use the recently proposed conformal decision theory to adapt the restrictiveness of control barrier function (CBF)-based safety constraints according to the observed prediction errors. We use these constraints to synthesize controllers that balance the objectives of safety and task accomplishment despite the prediction errors. We provide an upper bound on the time average of a monotonic function of the difference between the safety constraint computed from the predicted trajectories and the one computed from the ground-truth trajectories. We validate our theory with experimental results showing the performance of our controllers when navigating a robot through the multi-agent scenes of the Stanford Drone Dataset.
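Schematically, the guarantee described above can be written as follows, where $\hat h_t$ and $h_t$ denote the safety-constraint values computed from the predicted and ground-truth trajectories at time $t$, and $\varphi$ is the monotonic function; the exact form of the right-hand side is an assumption here, patterned on typical online-conformal regret bounds, and the paper's constants may differ:

$$\frac{1}{T} \sum_{t=1}^{T} \varphi\big(\hat h_t - h_t\big) \;\le\; \varepsilon + \frac{C}{T},$$

with $\varepsilon$ a user-chosen target level and $C$ a constant that depends on the step size of the online conformal update.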