🤖 AI Summary
Large language models (LLMs) in recommender systems face two critical challenges: inherent bias that leads to unfair recommendations, and dimensional collapse when aligning side information with collaborative signals, which impairs user preference modeling. To address these, we propose the Counterfactual LLM-based Recommendation framework (CLLMR). CLLMR introduces a novel spectrum-based side-information encoder that implicitly integrates structural patterns from the historical interaction graph, and incorporates a counterfactual reasoning mechanism to disentangle LLM-inherent biases, enabling causal-level co-optimization of fairness and representation diversity. The method jointly leverages spectral graph encoding, causal embedding, and a bias-correcting loss function. Extensive experiments on multiple benchmark datasets demonstrate that CLLMR consistently outperforms state-of-the-art methods, achieving significant gains in Recall@10 and NDCG@10, while mitigating dimensional collapse and sharpening the model's ability to discriminate among user preferences.
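To make the spectral graph encoding concrete, the sketch below shows one plausible way to fuse interaction-graph structure into LLM-derived item embeddings: build the symmetrically normalised adjacency of the bipartite user-item graph, take its top-k eigenvectors as smooth structural features, and concatenate them with the side-information embeddings. This is an illustrative reconstruction under stated assumptions, not the paper's actual encoder; all function and variable names are hypothetical.

```python
import numpy as np

def spectral_side_encoder(interactions, side_emb, k=8):
    """Illustrative spectral side-information encoder (assumed design).

    interactions : (n_users, n_items) binary user-item matrix
    side_emb     : (n_items, d) LLM-derived item embeddings
    Returns item embeddings augmented with k spectral features.
    """
    n_users, n_items = interactions.shape
    # Adjacency of the bipartite user-item graph, made symmetric.
    a = np.zeros((n_users + n_items, n_users + n_items))
    a[:n_users, n_users:] = interactions
    a[n_users:, :n_users] = interactions.T
    # Symmetric normalisation: D^{-1/2} A D^{-1/2} (guard isolated nodes).
    d = a.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    a_norm = d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]
    # Top-k eigenvectors form a smooth structural basis of the graph.
    _, vecs = np.linalg.eigh(a_norm)
    item_basis = vecs[n_users:, -k:]          # structural features per item
    # Fuse structure with side information (simple concatenation here).
    return np.concatenate([side_emb, item_basis], axis=1)
```

Because the structural basis comes from the interaction graph itself, the aligned representation is anchored to collaborative signal rather than being free to contract into a low-dimensional subspace.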
📝 Abstract
The rapid development of Large Language Models (LLMs) creates new opportunities for recommender systems, especially by exploiting the side information (e.g., descriptions and analyses of items) generated by these models. However, aligning this side information with collaborative information from historical interactions poses significant challenges. The inherent biases within LLMs can skew recommendations, resulting in distorted and potentially unfair user experiences. Moreover, propensity bias during alignment tends to compress all inputs into a low-dimensional subspace, a phenomenon known as dimensional collapse, which severely restricts the recommender system's ability to capture user preferences and behaviours. To address these issues, we introduce a novel framework named Counterfactual LLM Recommendation (CLLMR). Specifically, we propose a spectrum-based side-information encoder that implicitly embeds structural information from historical interactions into the side-information representation, thereby circumventing the risk of dimensional collapse. Furthermore, CLLMR explores the causal relationships inherent in LLM-based recommender systems and, by leveraging counterfactual inference, counteracts the biases introduced by LLMs. Extensive experiments demonstrate that our CLLMR approach consistently enhances the performance of various recommender models.
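The counterfactual-inference step described above is commonly realised by contrasting the factual prediction with a counterfactual one in which only the biased branch is active, then removing the latter's direct effect at inference time. The sketch below shows this generic pattern, assuming a full score and an LLM-only counterfactual score are available; the subtraction form and the `alpha` weight are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def counterfactual_debias(score_full, score_llm_only, alpha=0.5):
    """Counterfactual debiasing sketch (assumed form).

    score_full     : ranking scores from the full model (all inputs active)
    score_llm_only : scores from a counterfactual pass that sees only the
                     LLM side information, capturing its direct bias effect
    alpha          : strength of the bias correction (hypothetical knob)

    Subtracting the scaled counterfactual branch removes the direct
    effect of LLM-introduced bias from the final ranking score.
    """
    return np.asarray(score_full) - alpha * np.asarray(score_llm_only)

# Usage: items favoured mainly by the LLM branch are demoted.
debiased = counterfactual_debias([1.0, 0.9], [0.2, 0.8], alpha=0.5)
```

In this toy usage, the second item's high LLM-only score pulls its debiased score below the first item's, even though their factual scores were close.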