🤖 AI Summary
To address a critical challenge in physical-world recommender systems, namely frequent user noncompliance (rejecting recommendations and reverting to habitual choices), this paper formally introduces the “Nah Bandit” problem: users may actively reject a recommendation (say “nah”) and instead select a default option that reflects their intrinsic preferences. We propose an interaction model that incorporates anchoring bias and develop the Expert with Clustering (EWC) algorithm, which leverages dual feedback, from both recommended and non-recommended items, to accelerate preference learning. Our approach integrates contextual bandits, hierarchical clustering, and anchoring-parameterized reward estimation, and establishes a theoretical regret bound of $O(N\sqrt{T \log K} + NT)$, implying superior short-horizon performance over LinUCB. Empirical evaluation demonstrates that EWC significantly outperforms supervised learning and conventional contextual bandit baselines in both recommendation accuracy and convergence speed.
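To ground the interaction model, here is a minimal sketch, assuming a linear utility model in which a recommendation adds an anchoring bonus to the recommended option. The names (`theta`, `alpha`, `choose`) are illustrative, not the paper's notation:

```python
import numpy as np

# A toy Nah Bandit user (assumed form, not the paper's exact model):
# utility is linear in context, and the recommended option gets an
# anchoring bonus `alpha`. Saying "nah" = choosing a different option.

rng = np.random.default_rng(0)
d, n_opts = 5, 4                  # context dimension, options per round
theta = rng.normal(size=d)        # user's latent preference vector
alpha = 0.3                       # anchoring strength of a recommendation

def choose(contexts, recommended):
    """Return the index the user picks: argmax of anchored utilities."""
    utilities = contexts @ theta          # base utility of each option
    utilities[recommended] += alpha       # anchoring effect of the rec
    return int(np.argmax(utilities))

contexts = rng.normal(size=(n_opts, d))
rec = 2
chosen = choose(contexts, rec)
complied = (chosen == rec)
# Whether or not the user complied, `chosen` reveals which option the
# user prefers, so the learner gets feedback on non-recommended items too.
print(f"recommended={rec}, chosen={chosen}, complied={complied}")
```

Because the user's choice is observed even on a “nah”, every round yields informative feedback over all options, which is why the problem sits between a bandit setup and supervised learning.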
📝 Abstract
Recommendation systems now pervade the digital world, ranging from advertising to entertainment. However, it remains challenging to implement effective recommendation systems in the physical world, such as in mobility or health. This work focuses on a key challenge: in the physical world, it is often easy for users to opt out of any recommendation that is not to their liking and fall back to their baseline behavior. It is thus crucial for cyber-physical recommendation systems to operate with an interaction model that is aware of such user behavior, lest users abandon the recommendations altogether. This paper therefore introduces the Nah Bandit, a tongue-in-cheek name for a bandit problem in which users can say “nah” to the recommendation and opt for their preferred option instead. As such, this problem lies between a typical bandit setup and supervised learning. We model user non-compliance by parameterizing an anchoring effect of recommendations on users. We then propose the Expert with Clustering (EWC) algorithm, a hierarchical approach that incorporates feedback from both recommended and non-recommended options to accelerate user preference learning. In a recommendation scenario with $N$ users, $T$ rounds per user, and $K$ clusters, EWC achieves a regret bound of $O(N\sqrt{T \log K} + NT)$, yielding superior theoretical performance in the short term compared to the LinUCB algorithm. Experimental results also highlight that EWC outperforms both supervised learning and traditional contextual bandit approaches. These findings indicate that effective use of non-compliance feedback can accelerate preference learning and improve recommendation accuracy. This work lays the foundation for future research on the Nah Bandit problem, providing a robust framework for more effective recommendation systems.
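For intuition on how EWC uses this dual feedback, here is a Hedge-style sketch under stated assumptions: the $K$ cluster preference vectors (the “experts”) come from offline hierarchical clustering, a 0/1 loss scores each expert against the user's observed choice, and the user model is the anchored-utility sketch above. All names and the simulation setup are hypothetical, not the paper's exact specification:

```python
import numpy as np

rng = np.random.default_rng(1)
d, K, T, n_opts = 5, 3, 50, 4
cluster_prefs = rng.normal(size=(K, d))   # assumed output of clustering
true_theta = cluster_prefs[0] + 0.1 * rng.normal(size=d)  # user near cluster 0
eta = np.sqrt(np.log(K) / T)              # standard Hedge learning rate

def user_choice(contexts, recommended, theta, alpha=0.3):
    """User picks the utility-maximizing option, with an anchoring
    bonus `alpha` on the recommended one (as in the sketch above)."""
    u = contexts @ theta
    u[recommended] += alpha
    return int(np.argmax(u))

weights = np.ones(K)                      # one weight per cluster expert
for t in range(T):
    contexts = rng.normal(size=(n_opts, d))
    # Each expert recommends its own utility-maximizing option.
    expert_recs = np.argmax(contexts @ cluster_prefs.T, axis=0)
    rec = int(expert_recs[np.argmax(weights)])   # follow the leading expert
    chosen = user_choice(contexts, rec, true_theta)
    # Non-compliance is informative: any expert whose pick differs from
    # the user's actual choice incurs loss, recommended or not.
    losses = (expert_recs != chosen).astype(float)
    weights *= np.exp(-eta * losses)             # multiplicative-weights update

print("normalized expert weights:", weights / weights.sum())
```

Intuitively, the $N\sqrt{T \log K}$ term in the bound matches the standard Hedge regret over $K$ experts accumulated across $N$ users, which is why identifying a user's cluster can pay off faster than learning a per-user linear model from scratch, as LinUCB does.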