🤖 AI Summary
To address the limited expressiveness, computational instability, and popularity bias inherent in Euclidean-space modeling for recommender systems, this paper proposes a hyperbolic representation learning framework. Methodologically, it (1) reformulates the hyperbolic distance function to unlock representation capacity beyond conventional Euclidean space; (2) introduces a triplet loss built from pairwise interaction terms that models the ternary relation among a user, a preferred item, and a non-preferred item, jointly optimizing their geometric arrangement on the hyperbolic manifold; and (3) employs Riemannian optimization to ensure training stability. Extensive experiments on multiple benchmark datasets demonstrate that the proposed method significantly outperforms state-of-the-art Euclidean and hyperbolic baselines. It achieves consistent improvements in standard metrics (e.g., Recall@K) while effectively mitigating popularity bias, thereby enhancing recommendation diversity and personalization.
📝 Abstract
Recent studies have demonstrated the potential of hyperbolic geometry for capturing complex patterns from interaction data in recommender systems. In this work, we introduce a novel hyperbolic recommendation model that uses geometrical insights to improve representation learning while also increasing computational stability. We reformulate the notion of hyperbolic distance to unlock additional representation capacity over conventional Euclidean space and learn more expressive user and item representations. To better capture user–item interactions, we construct a triplet loss that models ternary relations between users and their corresponding preferred and non-preferred choices through a mix of pairwise interaction terms driven by the geometry of the data. Our hyperbolic approach not only outperforms existing Euclidean and hyperbolic models but also reduces popularity bias, leading to more diverse and personalized recommendations.
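To make the geometric setup concrete, the sketch below shows the two conventional building blocks the abstract refers to: the standard geodesic distance on the Poincaré ball and a margin-based triplet loss over (user, preferred item, non-preferred item) triples. The paper's reformulated distance and its specific pairwise interaction terms are not given here, so this is only an illustrative baseline, not the authors' method; all function names and the margin value are assumptions.

```python
# Hedged sketch: standard Poincare-ball distance and a plain hinge triplet
# loss. The paper's reformulated distance and interaction terms are NOT
# reproduced here; this shows only the conventional ingredients.
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincare ball."""
    sq_dist = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_dist / max(denom, eps))

def triplet_loss(user, pos_item, neg_item, margin=0.5):
    """Hinge loss: preferred items should sit closer to the user
    (in hyperbolic distance) than non-preferred items, by at least `margin`."""
    return max(0.0, margin
               + poincare_distance(user, pos_item)
               - poincare_distance(user, neg_item))

# Toy embeddings (all norms < 1, so the points lie inside the ball).
user = np.array([0.10, 0.20])
pos = np.array([0.12, 0.22])    # near the user -> small distance
neg = np.array([-0.60, -0.50])  # far from the user -> large distance
loss = triplet_loss(user, pos, neg)
```

In practice the embeddings would be trained with a Riemannian optimizer (e.g. Riemannian SGD) so that updates stay on the manifold, which is the stability mechanism the summary alludes to.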