RAD-DPO: Robust Adaptive Denoising Direct Preference Optimization for Generative Retrieval in E-commerce

πŸ“… 2026-02-27
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the challenge of aligning generative retrieval with complex user preferences in e-commerce, where directly applying Direct Preference Optimization (DPO) to structured Semantic IDs is hindered by gradient conflicts on shared prefixes, noise from pseudo-negative samples, and probability suppression of valid positives in multi-label settings. To overcome these limitations, the paper introduces three techniques: token-level gradient detachment to preserve hierarchical prefix structures, similarity-based dynamic reward weighting to mitigate label noise, and a hybrid loss combining a multi-label global contrastive objective with global supervised fine-tuning to broaden positive-sample coverage. The approach alleviates the core constraints of DPO in structured generative retrieval and improves ranking quality and training efficiency, as demonstrated by extensive offline evaluations and online A/B tests on a large-scale e-commerce platform.
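The first technique, token-level gradient detachment, can be illustrated with a minimal PyTorch sketch. The idea: when a chosen and a rejected SID share a hierarchical prefix, stop the gradient on the rejected side's prefix tokens so DPO's negative gradient does not penalize structure both sequences have in common. The function name and interface below are assumptions for illustration, not the paper's implementation:

```python
import torch

def dpo_logps_with_prefix_detach(chosen_logps, rejected_logps,
                                 chosen_ids, rejected_ids):
    """Sum per-token log-probs for a DPO pair, detaching the rejected
    tokens on the shared SID prefix (illustrative sketch).

    chosen_logps, rejected_logps: (T,) per-token log-probabilities
    chosen_ids, rejected_ids:     (T,) SID token ids
    """
    # Length of the common hierarchical prefix of the two SIDs.
    prefix_len = 0
    for c, r in zip(chosen_ids.tolist(), rejected_ids.tolist()):
        if c != r:
            break
        prefix_len += 1
    # Stop-gradient on the rejected side's shared-prefix tokens, so the
    # DPO "push down" signal only hits tokens where the SIDs diverge.
    rej = torch.cat([rejected_logps[:prefix_len].detach(),
                     rejected_logps[prefix_len:]])
    return chosen_logps.sum(), rej.sum()
```

After backpropagation, the rejected sequence's prefix positions receive zero gradient while the diverging suffix is still penalized as usual.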

πŸ“ Abstract
Generative Retrieval (GR) has emerged as a powerful paradigm in e-commerce search, retrieving items via autoregressive decoding of Semantic IDs (SIDs). However, aligning GR with complex user preferences remains challenging. While Direct Preference Optimization (DPO) offers an efficient alignment solution, its direct application to structured SIDs suffers from three limitations: (i) it penalizes shared hierarchical prefixes, causing gradient conflicts; (ii) it is vulnerable to noisy pseudo-negatives from implicit feedback; and (iii) in multi-label queries with multiple relevant items, it exacerbates a probability "squeezing effect" among valid candidates. To address these issues, we propose RAD-DPO, which introduces token-level gradient detachment to protect prefix structures, similarity-based dynamic reward weighting to mitigate label noise, and a multi-label global contrastive objective integrated with global SFT loss to explicitly expand positive coverage. Extensive offline experiments and online A/B testing on a large-scale e-commerce platform demonstrate significant improvements in ranking quality and training efficiency.
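The second limitation, noisy pseudo-negatives, motivates similarity-based dynamic reward weighting. A minimal sketch of the idea, assuming a cosine-similarity weight of the form `(1 - sim)` and the standard DPO pairwise objective (the function name, embedding inputs, and weighting form are illustrative assumptions, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def weighted_dpo_loss(policy_chosen, policy_rejected,
                      ref_chosen, ref_rejected,
                      pos_emb, neg_emb, beta=0.1):
    """DPO pairwise loss with a similarity-based dynamic weight
    (illustrative sketch). A pseudo-negative that closely resembles
    the positive item is likely label noise, so its pair is
    down-weighted toward zero."""
    # Standard DPO implicit-reward margin.
    logits = beta * ((policy_chosen - ref_chosen)
                     - (policy_rejected - ref_rejected))
    # Weight shrinks as the "negative" approaches the positive item.
    sim = F.cosine_similarity(pos_emb, neg_emb, dim=-1)
    weight = (1.0 - sim).clamp(min=0.0)
    # -log(sigmoid(x)) == softplus(-x)
    return (weight * F.softplus(-logits)).mean()
```

With this weighting, a pseudo-negative whose embedding matches the positive contributes nothing to the loss, while clearly dissimilar negatives keep full gradient strength.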
Problem

Research questions and friction points this paper is trying to address.

Generative Retrieval
Direct Preference Optimization
Semantic IDs
Label Noise
Multi-label Queries
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generative Retrieval
Direct Preference Optimization
Token-level Gradient Detachment
Dynamic Reward Weighting
Multi-label Contrastive Learning
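The last innovation above, pairing a multi-label global contrastive objective with a global SFT term, can be sketched as follows. This is a minimal illustration under stated assumptions: the function name, the `alpha` mixing weight, and the exact term forms are hypothetical, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def hybrid_multilabel_loss(scores, pos_mask, alpha=0.5):
    """Multi-label global contrastive loss plus a global SFT (NLL)
    term (illustrative sketch).

    scores:   (N,) sequence-level log-scores over N candidate SIDs
    pos_mask: (N,) bool, True for every item relevant to the query
    """
    log_probs = F.log_softmax(scores, dim=-1)
    # Contrastive term: raise the TOTAL probability mass on all
    # positives jointly, so valid candidates do not squeeze each
    # other's probability as they would under one-hot targets.
    contrastive = -torch.logsumexp(log_probs[pos_mask], dim=-1)
    # Global SFT term: average NLL over each positive individually,
    # broadening coverage of every relevant item.
    sft = -(log_probs[pos_mask]).mean()
    return contrastive + alpha * sft
```

Note the design choice: the contrastive term uses `logsumexp` over all positives, so it is already satisfied when the mass is spread across relevant items in any proportion, while the SFT term still pulls each positive up individually.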
πŸ”Ž Similar Papers
No similar papers found.