AI Summary
This study addresses how user-rating-based recommendation mechanisms on digital platforms can exacerbate discrimination against service providers from marginalized groups due to implicit biases. The authors develop an evolutionary game-theoretic model to capture the platform's trade-off between promoting highly rated individuals and ensuring fair exposure for protected groups. Their analysis reveals a fundamental tension between user experience and group fairness. To mitigate this without requiring precise quantification of bias, they propose an intervention strategy that moderately increases the visibility of protected groups by adjusting the demographic composition of search results. Theoretical analysis and simulations demonstrate that this approach substantially reduces unfairness while imposing negligible costs on user experience, outperforming baseline systems that ignore group attributes.
Abstract
The digital services economy consists of online platforms that facilitate interactions between service providers and consumers. This ecosystem is characterized by short-term, often one-off, transactions between parties that have no prior familiarity. To establish trust among users, platforms employ rating systems that allow users to report on the quality of their previous interactions. However, while arguably crucial for these platforms to function, rating systems can perpetuate negative biases against marginalised groups. This paper investigates how to design platforms around biased reputation systems, reducing discrimination while maintaining incentives for all service providers to offer high-quality service to users. We introduce an evolutionary game theoretical model to study how digital platforms can perpetuate or counteract rating-based discrimination. We focus on the platforms' decisions to promote service providers who have high reputations or who belong to a specific protected group. Our results demonstrate a fundamental trade-off between user experience and fairness: promoting highly rated providers benefits users, but lowers the demand for marginalised providers against whom the ratings are biased. Our results also provide evidence that intervening by tuning the demographics of the search results is a highly effective way of reducing unfairness while minimally impacting users. Furthermore, we show that even when precise measurements of the level of rating bias affecting marginalised service providers are unavailable, there is still potential to improve upon a recommender system which ignores protected characteristics. Altogether, our model highlights the benefits of proactive anti-discrimination design in systems where ratings are used to promote cooperative behaviour.
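To make the kind of intervention described above concrete, the sketch below is a minimal toy simulation of "tuning the demographics of the search results": a platform ranks providers by ratings that are systematically biased against a protected group, and an intervention reserves a share of result slots for the highest-rated protected providers. This is not the paper's evolutionary game-theoretic model; the function name, parameter values, and the additive-bias rating model are illustrative assumptions introduced here only to show how exposure and user-experience proxies trade off under such a policy.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_providers=1000, protected_frac=0.3, rating_bias=0.5,
             top_k=100, reserved_share=0.0):
    """Rank providers by (biased) observed ratings; optionally reserve a share
    of the result slots for the protected group. All parameters are illustrative."""
    protected = rng.random(n_providers) < protected_frac
    quality = rng.normal(0.0, 1.0, n_providers)          # true service quality
    rating = quality - rating_bias * protected            # ratings biased against protected group

    # Intervention: reserve `n_reserved` slots for the best-rated protected providers,
    # then fill the remaining slots purely by observed rating (baseline when share = 0).
    n_reserved = int(reserved_share * top_k)
    prot_idx = np.where(protected)[0]
    reserved = prot_idx[np.argsort(rating[prot_idx])[::-1][:n_reserved]]
    remaining = np.setdiff1d(np.arange(n_providers), reserved)
    open_slots = remaining[np.argsort(rating[remaining])[::-1][:top_k - n_reserved]]
    shown = np.concatenate([reserved, open_slots])

    exposure = protected[shown].mean()   # protected-group share of the shown results
    user_value = quality[shown].mean()   # average true quality shown: proxy for user experience
    return exposure, user_value

for share in (0.0, 0.15, 0.3):
    exp_share, val = simulate(reserved_share=share)
    print(f"reserved_share={share:.2f}  protected exposure={exp_share:.2f}  avg quality shown={val:.3f}")
```

Under these toy assumptions, increasing the reserved share raises the protected group's exposure at only a small cost to the average true quality of the displayed providers, which mirrors the qualitative trade-off the abstract describes; the paper's actual analysis rests on the evolutionary dynamics of provider behaviour rather than on this static ranking example.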