Post-hoc Popularity Bias Correction in GNN-based Collaborative Filtering

📅 2025-10-14
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the amplification of popularity bias in GNN-based collaborative filtering caused by long-tailed interaction distributions, this paper proposes a post-hoc debiasing method that requires no retraining. Unlike existing approaches that modify loss functions or reweight neighborhoods, the method explicitly models a "popularity direction" in the pre-trained GNN node embedding space and removes popularity-related components via orthogonal projection, thereby avoiding the instability that arises from early-stage training dynamics. The method is architecture- and training-agnostic and integrates with diverse GNN-CF models without altering model structure or optimization procedures. Extensive experiments show that it consistently outperforms state-of-the-art debiasing methods on standard accuracy metrics (Recall, NDCG) as well as personalization-aware fairness metrics, improving both recommendation quality and fairness.
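The projection step the summary describes can be sketched in a few lines. This is a minimal illustration of orthogonal projection onto the complement of a popularity direction; the array names, sizes, and random data are illustrative assumptions, not the paper's actual embeddings or estimator.

```python
import numpy as np

# Hypothetical pre-trained item embeddings (n_items x dim) and a
# popularity direction vector v (dim,); all values here are synthetic.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(5, 4))
v = rng.normal(size=4)

# Normalize the popularity direction.
u = v / np.linalg.norm(v)

# Remove each embedding's component along u:
#   e_debiased = e - (e . u) u
debiased = embeddings - np.outer(embeddings @ u, u)

# After projection, every embedding is orthogonal to the popularity direction.
print(np.allclose(debiased @ u, 0.0))  # True
```

Because the correction is a single linear projection of the stored embedding table, it can be applied to any pre-trained GNN-CF model without touching the architecture or rerunning optimization, which is what makes the approach post hoc.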

📝 Abstract
User historical interaction data is the primary signal for learning user preferences in collaborative filtering (CF). However, the training data often exhibits a long-tailed distribution, where only a few items have the majority of interactions. CF models trained directly on such imbalanced data are prone to learning popularity bias, which reduces personalization and leads to suboptimal recommendation quality. Graph Neural Networks (GNNs), while effective for CF due to their message passing mechanism, can further propagate and amplify popularity bias through their aggregation process. Existing approaches typically address popularity bias by modifying training objectives but fail to directly counteract the bias propagated during the GNN's neighborhood aggregation. Applying weights to interactions during aggregation can help alleviate this problem, yet it risks distorting model learning due to unstable node representations in the early stages of training. In this paper, we propose a Post-hoc Popularity Debiasing (PPD) method that corrects popularity bias in GNN-based CF and operates directly on pre-trained embeddings without requiring retraining. By estimating interaction-level popularity and removing popularity components from node representations via a popularity direction vector, PPD reduces bias while preserving user preferences. Experimental results show that our method outperforms state-of-the-art approaches for popularity bias correction in GNN-based CF.
Problem

Research questions and friction points this paper is trying to address.

Correcting popularity bias in GNN-based collaborative filtering
Addressing biased node embeddings from imbalanced training data
Removing popularity components without model retraining
Innovation

Methods, ideas, or system contributions that make the work stand out.

Post-hoc debiasing corrects pre-trained GNN embeddings
Estimates popularity bias via interaction-level direction vector
Removes popularity components without model retraining
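The bullets above hinge on estimating a popularity direction vector from the data. One simple way to obtain such a vector, sketched below under assumed names and synthetic data, is the difference between the centroid of popular ("head") item embeddings and the centroid of unpopular ("tail") ones; the paper's interaction-level estimator may differ from this illustration.

```python
import numpy as np

# Synthetic setup: item embeddings and long-tailed per-item interaction
# counts (a few head items, many cold-tail items). Purely illustrative.
rng = np.random.default_rng(1)
item_emb = rng.normal(size=(100, 8))
counts = np.array([500, 300, 120, 60, 30] + [5] * 95)

# Split items into the less-interacted half (tail) and more-interacted
# half (head) by sorting on interaction count.
order = np.argsort(counts)
tail_idx, head_idx = order[:50], order[50:]

# Popularity direction: head centroid minus tail centroid, unit-normalized.
direction = item_emb[head_idx].mean(axis=0) - item_emb[tail_idx].mean(axis=0)
direction /= np.linalg.norm(direction)

print(direction.shape)  # (8,)
```

The resulting unit vector can then be projected out of every node embedding, which removes the popularity-aligned component while leaving the orthogonal, preference-carrying part of the representation intact.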