Subjective Behaviors and Preferences in LLM: Language of Browsing

📅 2025-08-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper investigates the limitations of large language models (LLMs) in modeling users' subjective browsing behavior, addressing three core questions: (1) whether a smaller model can better capture the syntactically unconstrained "language of browsing"; (2) whether an LM with a single set of parameters can adequately represent user heterogeneity; and (3) whether high average performance also yields low variance across users, i.e., strong user-level alignment. To this end, the authors propose HeTLM, a clustering-aware heterogeneous training framework featuring page-level tokenization, a lightweight model architecture, and user-clustering-driven heterogeneous parameter optimization. Experiments show that HeTLM significantly outperforms mainstream LLMs on browsing-sequence modeling: it improves aggregate performance while substantially reducing cross-user prediction variance, thereby achieving stronger individual-level behavioral consistency and preference alignment. These results highlight the advantage of a "small and specialized" paradigm for user behavior modeling.
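As a rough illustration of the clusterwise training idea, here is a minimal, hypothetical sketch: users are clustered on bag-of-pages features, and a tiny bigram model over page tokens stands in for the small LM. The clustering choice, the bigram stand-in, and every name below are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch of clusterwise ("heterogeneity-aware") LM training.
# Each user's browsing log is a sequence of integer page ids.
from collections import defaultdict

import numpy as np
from sklearn.cluster import KMeans

def bag_of_pages(seq, vocab_size):
    """Frequency vector of page tokens for one user's log."""
    v = np.zeros(vocab_size)
    for p in seq:
        v[p] += 1
    return v / max(len(seq), 1)

def fit_bigram(seqs, vocab_size, alpha=0.1):
    """Tiny stand-in 'LM': smoothed next-page transition probabilities."""
    counts = np.full((vocab_size, vocab_size), alpha)
    for seq in seqs:
        for a, b in zip(seq, seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def train_hetlm_sketch(user_seqs, vocab_size, n_clusters=2, seed=0):
    """Cluster users, then fit one set of parameters per cluster."""
    users = list(user_seqs)
    X = np.stack([bag_of_pages(user_seqs[u], vocab_size) for u in users])
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(X)
    grouped = defaultdict(list)
    for u, c in zip(users, labels):
        grouped[c].append(user_seqs[u])
    models = {c: fit_bigram(s, vocab_size) for c, s in grouped.items()}
    return models, dict(zip(users, labels))

if __name__ == "__main__":
    logs = {"u1": [0, 1, 2, 1], "u2": [0, 1, 2],
            "u3": [3, 4, 3], "u4": [4, 3, 4]}
    models, assignment = train_hetlm_sketch(logs, vocab_size=5)
    print(assignment)  # e.g., {'u1': 0, 'u2': 0, 'u3': 1, 'u4': 1}
```

The point of the sketch is the structure, not the model class: each cluster gets its own parameters, so predictions for a user come from a model trained on behaviorally similar users rather than from one global average.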

📝 Abstract
A Large Language Model (LLM) offers versatility across domains and tasks, purportedly benefiting users with a wide variety of behaviors and preferences. We question this perception about an LLM when users have inherently subjective behaviors and preferences, as seen in their ubiquitous and idiosyncratic browsing of websites or apps. The sequential behavior logs of pages, thus generated, form something akin to each user's self-constructed "language", albeit without the structure and grammar imbued in natural languages. We ask: (i) Can a small LM represent the "language of browsing" better than a large LM? (ii) Can an LM with a single set of parameters (or, single LM) adequately capture myriad users' heterogeneous, subjective behaviors and preferences? (iii) Can a single LM with high average performance yield low variance in performance, making alignment good at the user level? We introduce clusterwise LM training, HeTLM (Heterogeneity-aware Training of Language Model), appropriate for subjective behaviors. We find that (i) a small LM trained using a page-level tokenizer outperforms large pretrained or finetuned LMs; (ii) HeTLM with heterogeneous, cluster-specific sets of parameters outperforms a single LM of the same family, controlling for the number of parameters; and (iii) a higher mean and a lower variance in generation ensue, implying improved alignment.
Problem

Research questions and friction points this paper is trying to address.

Can small LMs better represent browsing behavior language?
Can single LMs capture diverse user preferences adequately?
Can high-performance LMs achieve low variance for user alignment?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Small LM with page-level tokenizer outperforms large LMs (see the tokenizer sketch after this list)
HeTLM uses cluster-specific parameters for heterogeneity
Achieves higher mean and lower variance in generation
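Below is a minimal sketch of what page-level tokenization might look like: each distinct page becomes a single token rather than being split into subwords. The class name and the <unk> handling are illustrative assumptions, not the paper's code.

```python
# Hypothetical page-level tokenizer: one token per distinct page.
class PageTokenizer:
    def __init__(self):
        self.page_to_id = {"<unk>": 0}  # reserve id 0 for unseen pages

    def fit(self, logs):
        """logs: iterable of page sequences; assign ids in first-seen order."""
        for seq in logs:
            for page in seq:
                if page not in self.page_to_id:
                    self.page_to_id[page] = len(self.page_to_id)
        return self

    def encode(self, seq):
        return [self.page_to_id.get(p, 0) for p in seq]

tok = PageTokenizer().fit([["home", "search", "cart"], ["home", "deals"]])
print(tok.encode(["home", "deals", "checkout"]))  # [1, 4, 0]; checkout unseen
```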
👥 Authors
Sai Sundaresan
Adobe Research
Harshita Chopra
Adobe Research
Atanu R. Sinha
Adobe Research
Koustava Goswami
Research Scientist 2 @ Adobe Research
Natural Language Processing · Language Model · Multimodal Learning
Nagasai Saketh Naidu
UG Student, Indian Institute of Technology Bombay
NLP · Agents · LLMs
Raghav Karan
Adobe Research
N Anushka
Adobe Research