An Analysis of Safety Guarantees in Multi-Task Bayesian Optimization

📅 2025-03-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work addresses multi-task safe optimization of expensive black-box functions, aiming to balance global optimization and high-probability satisfaction of safety constraints in high-dimensional, computationally intensive settings. Method: We propose a novel Multi-Task Safe Bayesian Optimization (MT-SafeBO) framework. It introduces the first multi-task robust uniform error bound that jointly quantifies both model uncertainty and cross-task transfer error. The framework integrates multi-task Gaussian process modeling, safety-aware trust-region adaptation, and a conservative acquisition function grounded in this error bound. Contribution/Results: MT-SafeBO overcomes the sample-efficiency bottleneck of single-task safe BO. On benchmark functions and control tasks, it reduces function evaluations by 40–60% compared to state-of-the-art safe BO methods, while rigorously guaranteeing that the probability of satisfying safety constraints remains no lower than a user-specified threshold.

Technology Category

Application Category

📝 Abstract
In many practical scenarios of black box optimization, the objective function is subject to constraints that must be satisfied to avoid undesirable outcomes. Such constraints are typically unknown and must be learned during optimization. Safe Bayesian optimization aims to find the global optimum while ensuring that the constraints are satisfied with high probability. However, it is often sample-inefficient due to the small initial feasible set, which requires expansion by evaluating the objective or constraint functions, limiting its applicability to low-dimensional or inexpensive problems. To enhance sample efficiency, additional information from cheap simulations can be leveraged, albeit at the cost of safeness guarantees. This paper introduces a novel safe multi-task Bayesian optimization algorithm that integrates multiple tasks while maintaining high-probability safety. We derive robust uniform error bounds for the multi-task case and demonstrate the effectiveness of the approach on benchmark functions and a control problem. Our results show a significant improvement in sample efficiency, making the proposed method well-suited for expensive-to-evaluate functions.
Problem

Research questions and friction points this paper is trying to address.

Ensures safety in multi-task Bayesian optimization
Improves sample efficiency for expensive functions
Maintains high-probability safety with robust error bounds
Innovation

Methods, ideas, or system contributions that make the work stand out.

Integrates multiple tasks for optimization efficiency
Maintains high-probability safety during optimization
Uses robust uniform error bounds for safety
🔎 Similar Papers
No similar papers found.
J
Jannis O. Luebsen
Institute of Control Systems at Hamburg University of Technology, Hamburg, Germany
Annika Eichler
Annika Eichler
DESY