Length-Adaptive Interest Network for Balancing Long and Short Sequence Modeling in CTR Prediction

📅 2026-01-27
📈 Citations: 1
Influential: 1
🤖 AI Summary
This work addresses the performance degradation that short-sequence users suffer in existing click-through rate (CTR) prediction models when mixed-length user behavior sequences are modeled together, a problem exacerbated by attention polarization and an imbalanced distribution of training sequence lengths. To mitigate this, the authors propose LAIN, a framework that explicitly conditions sequence modeling on sequence length. LAIN introduces three lightweight, plug-and-play components (a spectral length encoder, length-conditioned prompting, and length-modulated attention) to dynamically adapt representation learning for both short and long sequences. LAIN uniformly enhances user representations across varying sequence lengths without compromising performance on long sequences. Experiments on three real-world datasets demonstrate consistent improvements, achieving up to a 1.15% AUC gain and a 2.25% log loss reduction, while remaining compatible with mainstream CTR backbone architectures.

📝 Abstract
User behavior sequences in modern recommendation systems exhibit significant length heterogeneity, ranging from sparse short-term interactions to rich long-term histories. While longer sequences provide more context, we observe that increasing the maximum input sequence length in existing CTR models paradoxically degrades performance for short-sequence users due to attention polarization and length imbalance in training data. To address this, we propose LAIN (Length-Adaptive Interest Network), a plug-and-play framework that explicitly incorporates sequence length as a conditioning signal to balance long- and short-sequence modeling. LAIN consists of three lightweight components: a Spectral Length Encoder that maps length into continuous representations, Length-Conditioned Prompting that injects global contextual cues into both long- and short-term behavior branches, and Length-Modulated Attention that adaptively adjusts attention sharpness based on sequence length. Extensive experiments on three real-world benchmarks across five strong CTR backbones show that LAIN consistently improves overall performance, achieving up to 1.15% AUC gain and 2.25% log loss reduction. Notably, our method significantly improves accuracy for short-sequence users without sacrificing long-sequence effectiveness. Our work offers a general, efficient, and deployable solution to mitigate length-induced bias in sequential recommendation.
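The abstract describes Length-Modulated Attention as adaptively adjusting attention sharpness based on sequence length. A minimal NumPy sketch of that general idea, using a hypothetical temperature schedule (the paper's actual formulation is not given here; softer attention for short sequences is the assumed goal, to counter attention polarization):

```python
import numpy as np

def length_modulated_attention(queries, keys, values, seq_len, max_len=100):
    """Toy sketch: scale attention logits by a length-dependent
    temperature so short sequences get softer (less polarized)
    attention weights. The linear schedule below is illustrative,
    not the paper's formulation."""
    d = queries.shape[-1]
    # Hypothetical schedule: shorter sequence -> higher temperature in [1, 2]
    temperature = 1.0 + (1.0 - seq_len / max_len)
    logits = queries @ keys.T / (np.sqrt(d) * temperature)
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(logits)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values, weights

rng = np.random.default_rng(0)
q = rng.normal(size=(1, 8))   # single query (e.g. target item)
k = rng.normal(size=(5, 8))   # 5 behavior embeddings
v = rng.normal(size=(5, 8))
out_short, w_short = length_modulated_attention(q, k, v, seq_len=5)
out_long, w_long = length_modulated_attention(q, k, v, seq_len=100)
# Higher temperature for the short sequence flattens the distribution,
# so its maximum attention weight is no larger than the long-sequence one.
print(w_short.max() <= w_long.max())
```

Because softmax with a higher temperature is strictly flatter for non-constant logits, the short-sequence branch spreads attention more evenly, which is one plausible way to mitigate the polarization effect the paper reports.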
Problem

Research questions and friction points this paper is trying to address.

CTR prediction
sequence length heterogeneity
attention polarization
length imbalance
short-sequence users
Innovation

Methods, ideas, or system contributions that make the work stand out.

Length-Adaptive Interest Network
Sequence Length Heterogeneity
Length-Conditioned Prompting
Length-Modulated Attention
CTR Prediction
Zhicheng Zhang
Carnegie Mellon University
Reinforcement Learning, Explainable RL
Zhaocheng Du
Huawei Noah's Ark Lab
Machine Learning, Recommendation System
Jieming Zhu
Huawei Noah’s Ark Lab
Jiwei Tang
Tsinghua University
Natural Language Processing, Large Language Model
Fengyuan Lu
School of Artificial Intelligence and Science, Nanjing University
Jiaheng Wang
Hong Kong University of Science and Technology
Song-Li Wu
Shenzhen International Graduate School, Tsinghua University
Qianhui Zhu
Shenzhen International Graduate School, Tsinghua University
Jingyu Li
University of Science and Technology of China
Deep Learning, Computer Vision, Natural Language Processing
Hai-Tao Zheng
Shenzhen International Graduate School, Tsinghua University; Peng Cheng Laboratory
Zhenhua Dong
Noah's Ark Lab, Huawei Technologies Co., Ltd.
Recommender System, Causal Inference, Counterfactual Learning, Trustworthy AI, Machine Learning