The Burden of Interactive Alignment with Inconsistent Preferences

📅 2025-10-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
User preference inconsistency, arising when short-term impulsive behavior interferes with the expression of long-term interests, hampers accurate preference modeling by recommendation algorithms. Method: a cognitive-alignment framework grounded in dual-system theory (System 1: impulsive; System 2: deliberative), formulated as a multi-leader, single-follower Stackelberg game between users and the algorithm. Contribution/Results: the paper introduces the concept of "alignment cost" and proves the existence of a critical prediction horizon: users who optimize over at least this horizon can steer the algorithm toward their true interests, whereas more myopic users are instead shaped by the algorithm's objective. It further shows that minimal conscious interventions (e.g., a single deliberate click) significantly reduce the alignment cost and shorten the minimum alignment time. The theoretical analysis reveals a dynamic equilibrium between rational user intervention and algorithmic responsiveness, yielding a verifiable, cognition-aware design principle for eliciting long-term interests.

📝 Abstract
From media platforms to chatbots, algorithms shape how people interact, learn, and discover information. Such interactions between users and an algorithm often unfold over multiple steps, during which strategic users can guide the algorithm to better align with their true interests by selectively engaging with content. However, users frequently exhibit inconsistent preferences: they may spend considerable time on content that offers little long-term value, inadvertently signaling that such content is desirable. Focusing on the user side, this raises a key question: what does it take for such users to align the algorithm with their true interests? To investigate these dynamics, we model the user's decision process as split between a rational system 2 that decides whether to engage and an impulsive system 1 that determines how long engagement lasts. We then study a multi-leader, single-follower extensive Stackelberg game, where users, specifically system 2, lead by committing to engagement strategies and the algorithm best-responds based on observed interactions. We define the burden of alignment as the minimum horizon over which users must optimize to effectively steer the algorithm. We show that a critical horizon exists: users who are sufficiently foresighted can achieve alignment, while those who are not are instead aligned to the algorithm's objective. This critical horizon can be long, imposing a substantial burden. However, even a small, costly signal (e.g., an extra click) can significantly reduce it. Overall, our framework explains how users with inconsistent preferences can align an engagement-driven algorithm with their interests in a Stackelberg equilibrium, highlighting both the challenges and potential remedies for achieving alignment.
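The interaction loop the abstract describes (System 2 decides whether to engage, System 1 determines how long engagement lasts, and the algorithm best-responds to observed engagement) can be sketched as a toy simulation. The content types, watch times, and update rule below are illustrative assumptions, not the paper's actual model:

```python
def simulate(skip_junk: bool, steps: int = 20) -> str:
    # Algorithm's per-type engagement estimates (its belief about the user).
    scores = {"junk": 0.5, "valued": 0.5}
    lr = 0.3  # learning rate of the engagement-driven update

    for _ in range(steps):
        # Follower: the algorithm best-responds by recommending the
        # content type with the highest estimated engagement.
        rec = max(scores, key=scores.get)
        if skip_junk and rec == "junk":
            # System 2 (deliberative) refuses to engage, signaling disinterest.
            watch = 0.0
        else:
            # System 1 (impulsive) sets engagement duration: junk captures
            # more watch time despite offering little long-term value.
            watch = 1.0 if rec == "junk" else 0.6
        # The algorithm learns only from observed engagement.
        scores[rec] += lr * (watch - scores[rec])

    return max(scores, key=scores.get)
```

In this sketch, a user whose System 2 never intervenes (`skip_junk=False`) inadvertently teaches the algorithm that junk is preferred, while a strategic user who withholds engagement steers it toward valued content, mirroring the abstract's point that selective engagement is what aligns the algorithm.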
Problem

Research questions and friction points this paper is trying to address.

How can a user's decision process be modeled when it is split between a rational (System 2) and an impulsive (System 1) component?
What is the minimum optimization horizon (the burden of alignment) a user needs in order to steer the algorithm?
How much can small costly signals shorten the critical horizon required for alignment?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model user decision as dual rational-impulsive systems
Analyze alignment via multi-leader Stackelberg game framework
Reduce alignment burden with small costly user signals