Scale-Invariant Learning-to-Rank

📅 2024-10-02
🏛️ ACM Conference on Recommender Systems
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the degradation in ranking performance caused by inconsistent feature scaling between training and online inference in learning-to-rank (LTR), this paper proposes a mathematically rigorous deep-and-wide joint modeling paradigm that guarantees scale invariance. It is the first LTR method to achieve end-to-end, normalization-free theoretical scale invariance, adapting automatically to arbitrary feature rescalings during both training and inference and thereby eliminating reliance on standardization or batch normalization. The approach integrates a co-designed architecture, explicit scale-invariance constraints, and a perturbation-robustness evaluation framework. Under simulated scale-mismatch scenarios, it significantly outperforms baselines: ranking loss decreases by 12.7% and NDCG@10 improves by 8.3%, with zero additional inference latency.

๐Ÿ“ Abstract
At Expedia, learning-to-rank (LTR) models play a key role on our website in sorting and presenting information more relevant to users, such as search filters, property rooms, amenities, and images. A major challenge in deploying these models is ensuring consistent feature scaling between training and production data, as discrepancies can lead to unreliable rankings when deployed. Normalization techniques like feature standardization and batch normalization could address these issues but are impractical in production due to latency impacts and the difficulty of distributed real-time inference. To address this feature scaling consistency issue, we introduce a scale-invariant LTR framework that combines a deep and a wide neural network to mathematically guarantee scale invariance in the model at both training and prediction time. We evaluate the framework in simulated real-world scenarios with injected feature scale issues, perturbing the test set at prediction time, and show that even with inconsistent train-test scaling, using the framework achieves better performance than not using it.
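The paper's exact deep-and-wide architecture is not reproduced here, but the core property it guarantees can be illustrated with a minimal sketch: a scorer whose output is unchanged when any feature column is multiplied by a positive constant. One simple (hypothetical) way to obtain this is to log-transform positive features and centre each column within the candidate list, so per-feature scale factors cancel; `scale_invariant_scores` and its weights `w` below are illustrative names, not the paper's implementation.

```python
import numpy as np

def scale_invariant_scores(X, w):
    """Score one list of candidate items so that multiplying any
    feature column by a positive constant leaves scores unchanged.
    Hypothetical sketch: log-transform, then centre each feature
    within the list, so log(c) cancels for any rescale factor c > 0."""
    logX = np.log(X)                    # features assumed strictly positive
    centred = logX - logX.mean(axis=0)  # per-column scale factor cancels here
    return centred @ w                  # simple linear "wide"-style scorer

rng = np.random.default_rng(0)
X = rng.uniform(0.1, 5.0, size=(6, 3))  # 6 candidate items, 3 features
w = np.array([0.5, -1.0, 2.0])

s1 = scale_invariant_scores(X, w)
# Simulate a train/serve scale mismatch: rescale each feature column.
s2 = scale_invariant_scores(X * np.array([10.0, 0.01, 3.0]), w)
assert np.allclose(s1, s2)  # identical scores, hence identical ranking
```

This mirrors the evaluation idea in the abstract: perturbing feature scales at prediction time leaves the ranking produced by a scale-invariant scorer untouched, whereas an ordinary linear or deep scorer on raw features would reorder items.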
Problem

Research questions and friction points this paper is trying to address.

Learning to Rank
Data Inconsistency
Model Degradation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deep and Wide Neural Networks
Learning to Rank (LTR) Framework
Robustness to Data Variability
Alessio Petrozziello
Expedia Group, London, United Kingdom
Christian Sommeregger
Expedia Group, London, United Kingdom
Ye-Sheen Lim
Expedia Group, London, United Kingdom