🤖 AI Summary
This paper studies the problem of learning optimal automated bidding strategies for advertisers participating in multi-platform auctions under budget and ROI constraints. Addressing the setting where value and cost functions are unknown, we propose a learning-augmented, query-driven optimization framework grounded in the diminishing returns assumption, integrating adaptive binary search with error-sensitive analysis. When predictions are accurate, the algorithm requires only $O(m)$ queries; under prediction errors, it degrades gracefully to $O(m \log(mn) \log n)$, yielding an overall query complexity of $O(m \log(m\eta) \log \eta)$, significantly improving over naive approaches. Our key contribution is the first bidding strategy learning method that simultaneously achieves high prediction accuracy and strong robustness with low query complexity, backed by rigorous theoretical guarantees and demonstrated practical efficiency.
📄 Abstract
We study the problem of finding the optimal bidding strategy for an advertiser in a multi-platform auction setting. The competition on a platform is captured by a value and a cost function, mapping bidding strategies to value and cost respectively. We assume a diminishing returns property, whereby the marginal cost is increasing in value. The advertiser uses an autobidder that selects a bidding strategy for each platform, aiming to maximize total value subject to budget and return-on-spend constraints. The advertiser has no prior information and learns about the value and cost functions by querying a platform with a specific bidding strategy. Our goal is to design algorithms that find the optimal bidding strategy with a small number of queries. We first present an algorithm that requires $O(m \log(mn) \log n)$ queries, where $m$ is the number of platforms and $n$ is the number of possible bidding strategies in each platform. Moreover, we adopt the learning-augmented framework and propose an algorithm that utilizes a (possibly erroneous) prediction of the optimal bidding strategy. We provide an $O(m \log(m\eta) \log \eta)$ query-complexity bound on our algorithm as a function of the prediction error $\eta$. This guarantee gracefully degrades to $O(m \log(mn) \log n)$, achieving a "best-of-both-worlds" scenario: $O(m)$ queries when given a correct prediction, and $O(m \log(mn) \log n)$ even for an arbitrary incorrect prediction.
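The error-sensitive behavior described above can be illustrated with a one-dimensional toy: a prediction-guided search over a monotone query interface, which doubles its step size outward from the predicted index and then finishes with standard binary search. This is only a minimal sketch of the general idea (query count scaling with the prediction error $\eta$ rather than the domain size $n$); the function name `first_true`, the monotone-predicate interface, and the single-platform setting are illustrative assumptions, not the paper's actual algorithm.

```python
def first_true(pred, n, guess):
    """Return (i, queries) where i is the smallest index in [0, n) with
    pred(i) == True, assuming pred is monotone (False...False True...True)
    and pred(n-1) is True.  `guess` is a possibly-wrong prediction of the
    answer: the number of queries is O(log |guess - answer|), and O(log n)
    even when the guess is arbitrarily bad."""
    queries = 0

    def q(i):
        nonlocal queries
        queries += 1
        return pred(i)

    guess = max(0, min(n - 1, guess))
    if q(guess):
        # Answer is at or left of the guess: double the step leftward
        # until the predicate flips (or we hit the left boundary).
        lo, hi, step = guess - 1, guess, 1
        while lo >= 0 and q(lo):
            hi, step = lo, step * 2
            lo = guess - step
        lo = max(lo, -1)              # sentinel: pred is False left of 0
    else:
        # Answer is strictly right of the guess: double the step rightward.
        lo, hi, step = guess, min(guess + 1, n - 1), 1
        while not q(hi):              # terminates since pred(n-1) is True
            lo, step = hi, step * 2
            hi = min(guess + step, n - 1)
    # Invariant: pred(lo) is False (or lo == -1) and pred(hi) is True;
    # finish with ordinary binary search on the bracketed interval.
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if q(mid):
            hi = mid
        else:
            lo = mid
    return hi, queries
```

With an exact prediction the search spends only a constant number of queries confirming it, while a maximally wrong prediction still costs only logarithmically many; running one such search per platform mirrors the $O(m)$ versus $O(m \log(mn) \log n)$ trade-off stated above.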