Safe Decentralized Multi-Agent Control using Black-Box Predictors, Conformal Decision Policies, and Control Barrier Functions

📅 2024-09-27
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Safety-critical control in decentralized multi-robot systems is challenged by uncertainty in black-box trajectory predictions. Method: This paper proposes a prediction-error-driven adaptive safety control framework that integrates conformal decision theory with control barrier functions (CBFs) for the first time, dynamically adjusting CBF constraint strength to jointly optimize safety and task performance. Theoretically, we derive a time-averaged upper bound on the deviation between predicted and true safety constraints, ensuring tightness of the safety boundary. Methodologically, the framework unifies black-box trajectory predictors, robust controllers, and monotonic constraint analysis. Results: Evaluated on the Stanford Drone dataset for multi-agent navigation, our approach achieves significant improvements in collision avoidance rate and task completion rate, empirically validating both theoretical safety guarantees and practical robustness under real-world deployment conditions.
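The adaptive mechanism described above can be sketched as a simple online update in the style of conformal decision theory: a safety margin grows when the observed prediction error exceeds it and shrinks otherwise, targeting a chosen miss rate. This is a minimal illustrative sketch, not the paper's algorithm; the names `update_margin`, `epsilon`, and `eta` are assumptions.

```python
# Hypothetical sketch of a conformal-style margin update: the margin lam
# is increased when the observed prediction error exceeds it (a "miss")
# and decreased otherwise, driving the long-run miss rate toward epsilon.

def update_margin(lam, pred_error, epsilon=0.1, eta=0.05):
    """One conformal update step for the adaptive safety margin."""
    miss = 1.0 if pred_error > lam else 0.0  # did the margin fail to cover the error?
    return lam + eta * (miss - epsilon)      # tighten on misses, relax on covers

# Run the update on a stream of simulated prediction errors.
errors = [0.3, 0.5, 0.2, 0.9, 0.4, 0.1, 0.7, 0.6]
lam = 0.0
for e in errors:
    lam = update_margin(lam, e)
```

Because the step size is fixed, the margin tracks a quantile of the error stream online without any distributional assumptions on the black-box predictor, which is what makes the approach compatible with arbitrary predictors.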

📝 Abstract
We address the challenge of safe control in decentralized multi-agent robotic settings, where agents use uncertain black-box models to predict other agents' trajectories. We use the recently proposed conformal decision theory to adapt the restrictiveness of control barrier function-based safety constraints based on observed prediction errors. We use these constraints to synthesize controllers that balance the objectives of safety and task accomplishment despite the prediction errors. We provide an upper bound on the time-averaged value of a monotonic function of the difference between the safety constraint based on the predicted trajectories and the constraint based on the ground-truth ones. We validate our theory through experimental results showing the performance of our controllers when navigating a robot in the multi-agent scenes of the Stanford Drone Dataset.
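A CBF-based safety constraint of the kind the abstract describes can be illustrated for a single-integrator robot: a barrier function keeps the robot at least a (margin-inflated) radius away from an agent's predicted position, and the desired control is minimally corrected to satisfy the barrier condition. This is a hypothetical sketch under assumed dynamics; `safe_control`, `r`, `lam`, and `alpha` are illustrative names, not the paper's notation.

```python
# Hypothetical single-integrator sketch: the CBF h(x) is positive when the
# robot is at least (r + lam) away from the predicted agent position; the
# control is u_des minimally corrected so that dh/dt >= -alpha * h.

import numpy as np

def safe_control(x, x_pred, u_des, r=0.5, lam=0.2, alpha=1.0):
    """Closed-form CBF filter for dynamics x_dot = u."""
    d = x - x_pred
    h = d @ d - (r + lam) ** 2          # barrier: positive when safely separated
    grad_h = 2.0 * d                    # gradient dh/dx
    lhs = grad_h @ u_des                # dh/dt under the desired control
    if lhs >= -alpha * h:               # desired control already satisfies the CBF
        return u_des
    # Minimal correction: project u_des onto the half-space grad_h @ u >= -alpha*h.
    correction = (-alpha * h - lhs) / (grad_h @ grad_h)
    return u_des + correction * grad_h

x = np.array([0.0, 0.0])
x_pred = np.array([1.0, 0.0])
u = safe_control(x, x_pred, np.array([1.0, 0.0]))  # heading straight at the agent
```

Inflating the radius by the conformally adapted margin `lam` is how prediction error would feed into the constraint's restrictiveness: larger observed errors yield a larger `lam` and a more conservative filter.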
Problem

Research questions and friction points this paper is trying to address.

Safe control in decentralized multi-agent robotic systems
Balancing safety and task completion with uncertain predictions
Upper bound on safety constraint deviations over time
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decentralized control with black-box predictors
Conformal decision policies for safety adaptation
Control barrier functions balancing safety and tasks
Sacha Huriot
Computer Science and Engineering Department, Washington University in St. Louis
Hussein Sibai
Washington University in St. Louis
Control Theory · Formal Methods · Machine Learning · Robotics