AI Summary
To address the challenge of ensuring fairness in recommender systems that lack explicit sensitive attributes, this paper proposes LLMFOSA, a novel framework for fair recommendation without sensitive labels. First, it employs multi-persona large language models to implicitly infer users' latent sensitive information, circumventing reliance on explicit annotations. Second, it introduces consensus-driven, confusion-aware representation learning, which decouples sensitive dimensions from recommendation representations via mutual information minimization. This work pioneers two key innovations: (1) multi-role collaborative reasoning for sensitive attribute inference, and (2) unsupervised confusion modeling for fairness-aware representation learning. Crucially, LLMFOSA achieves substantial fairness improvements without compromising recommendation accuracy: on two public benchmarks, it reduces statistical parity difference (SPD) by 37.2% and equal opportunity difference (EO) by 41.5%.
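The multi-role collaborative reasoning idea can be illustrated with a minimal sketch: several "personas" each produce an independent guess of a user's latent group from behavior, and a majority vote forms the consensus. The persona list, the `infer_label` stub, and the group labels below are illustrative placeholders, not the paper's actual prompts or inference pipeline.

```python
from collections import Counter

# Hypothetical persona roles; in LLMFOSA these would be distinct LLM prompts.
PERSONAS = ["sociologist", "marketer", "behavioral analyst"]

def infer_label(persona: str, user_history: list[str]) -> str:
    """Stand-in for an LLM call: guess a binary group label from behavior.

    Toy placeholder logic only; a real system would query an LLM with a
    persona-specific prompt and parse its answer.
    """
    score = sum(item.startswith("action") for item in user_history)
    return "group_a" if score % 2 == 0 else "group_b"

def consensus_label(user_history: list[str]) -> tuple[str, float]:
    """Aggregate per-persona guesses into a consensus label and agreement rate."""
    votes = [infer_label(p, user_history) for p in PERSONAS]
    label, count = Counter(votes).most_common(1)[0]
    return label, count / len(votes)
```

The agreement rate returned alongside the label is one simple way to capture the "collective consensus among agents" that the abstract mentions feeding into representation learning.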
Abstract
Despite the success of recommender systems in alleviating information overload, fairness issues have raised concerns in recent years, potentially leading to unequal treatment of certain user groups. While efforts have been made to improve recommendation fairness, they often assume that users' sensitive attributes are available during model training. However, collecting sensitive information can be difficult, especially on platforms that involve no personal information disclosure. Therefore, we aim to improve recommendation fairness without any access to sensitive attributes. This is a non-trivial task because uncovering latent sensitive patterns from complicated user behaviors without explicit sensitive attributes is challenging, and suboptimal estimates of sensitive distributions can hinder the fairness training process. To address these challenges, leveraging the remarkable reasoning abilities of Large Language Models (LLMs), we propose a novel LLM-enhanced framework for Fair recommendation withOut Sensitive Attributes (LLMFOSA). A Multi-Persona Sensitive Information Inference module employs LLMs with distinct personas that mimic diverse human perceptions to infer and distill sensitive information. Furthermore, a Confusion-Aware Sensitive Representation Learning module incorporates inference results and rationales to develop robust sensitive representations, accounting for mislabeling confusion and the collective consensus among agents. The model is then optimized with a formulated mutual information objective. Extensive experiments on two public datasets validate the effectiveness of LLMFOSA in improving fairness.
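To make the mutual information objective concrete, the sketch below computes an empirical estimate of I(X;Y) between a discretized representation feature and a binary sensitive label. The paper optimizes a formulated (variational) mutual information objective rather than this plug-in estimator; the estimator here only illustrates the quantity that fairness-aware training drives toward zero.

```python
import math
from collections import Counter

def mutual_information(xs: list[int], ys: list[int]) -> float:
    """Plug-in estimate of I(X;Y) in nats from paired discrete samples."""
    n = len(xs)
    p_xy = Counter(zip(xs, ys))          # joint counts
    p_x, p_y = Counter(xs), Counter(ys)  # marginal counts
    mi = 0.0
    for (x, y), c in p_xy.items():
        p = c / n
        # p * log( p(x,y) / (p(x) * p(y)) ), with counts rearranged to avoid division
        mi += p * math.log(p * n * n / (p_x[x] * p_y[y]))
    return mi

# A feature perfectly correlated with the sensitive label leaks it (MI = ln 2)...
leaky = mutual_information([0, 0, 1, 1], [0, 0, 1, 1])
# ...while an independent feature carries no sensitive information (MI = 0).
fair = mutual_information([0, 1, 0, 1], [0, 0, 1, 1])
```

Minimizing such a dependence measure between learned representations and inferred sensitive labels is one standard way to decouple sensitive dimensions from recommendation representations.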