Vector preference-based contextual bandits under distributional shifts

📅 2025-08-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This paper studies the multi-objective contextual bandit problem under distributional shift, where rewards are vector-valued and user preferences are encoded by a given preference cone. To handle dynamic environments, we propose a preference-cone-based vector ranking mechanism, coupled with adaptive discretization and optimistic elimination to enable real-time adaptation to unknown drifts. We introduce a Pareto-frontier distance to define preference-aware regret and establish a unified theoretical framework, yielding tight regret upper bounds under both slow and abrupt drift assumptions. Crucially, our bound recovers existing optimal results in the no-drift or single-objective settings and smoothly scales with dimensionality and drift magnitude. Experiments demonstrate the effectiveness and robustness of our approach in dynamic multi-objective environments.

Technology Category

Application Category

📝 Abstract
We consider contextual bandit learning under distribution shift when reward vectors are ordered according to a given preference cone. We propose an adaptive-discretization and optimistic elimination based policy that self-tunes to the underlying distribution shift. To measure the performance of this policy, we introduce the notion of preference-based regret which measures the performance of a policy in terms of distance between Pareto fronts. We study the performance of this policy by establishing upper bounds on its regret under various assumptions on the nature of distribution shift. Our regret bounds generalize known results for the existing case of no distribution shift and vectorial reward settings, and scale gracefully with problem parameters in presence of distribution shifts.
Problem

Research questions and friction points this paper is trying to address.

Learning contextual bandits under distributional shifts
Proposing adaptive policy with optimistic elimination
Measuring performance via preference-based regret bounds
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive-discretization optimistic elimination policy
Self-tuning to underlying distribution shifts
Preference-based regret measuring Pareto front distance
🔎 Similar Papers
No similar papers found.