Fairness Dynamics in Digital Economy Platforms with Biased Ratings

2026-02-18

Multiagent Systems; Computers and Society
AI summary

The authors study how online platforms use rating systems to build trust between service providers and users, and how these ratings can be biased against marginalized groups. They develop a model to understand how platforms can either worsen or reduce this discrimination through their decisions about whom to promote. Their findings show a trade-off: promoting highly rated providers helps users but can hurt marginalized providers who face biased ratings. However, the authors suggest that adjusting search results to include more diverse providers can reduce unfairness with little impact on user satisfaction. They also find that this approach works even without exact knowledge of rating biases.

digital services economy, online platforms, rating systems, bias, marginalized groups, evolutionary game theory, reputation systems, discrimination, recommender systems, fairness
Authors
J. Martin Smit, Fernando P. Santos
Abstract
The digital services economy consists of online platforms that facilitate interactions between service providers and consumers. This ecosystem is characterized by short-term, often one-off, transactions between parties that have no prior familiarity. To establish trust among users, platforms employ rating systems which allow users to report on the quality of their previous interactions. However, while arguably crucial for these platforms to function, rating systems can perpetuate negative biases against marginalised groups. This paper investigates how to design platforms around biased reputation systems, reducing discrimination while maintaining incentives for all service providers to offer high-quality service to users. We introduce an evolutionary game theoretical model to study how digital platforms can perpetuate or counteract rating-based discrimination. We focus on the platforms' decisions to promote service providers who have high reputations or who belong to a specific protected group. Our results demonstrate a fundamental trade-off between user experience and fairness: promoting highly rated providers benefits users, but lowers the demand for marginalised providers against whom the ratings are biased. Our results also provide evidence that intervening by tuning the demographics of the search results is a highly effective way of reducing unfairness while minimally impacting users. Furthermore, we show that even when precise measurements of the level of rating bias affecting marginalised service providers are unavailable, there is still potential to improve upon a recommender system that ignores protected characteristics. Altogether, our model highlights the benefits of proactive anti-discrimination design in systems where ratings are used to promote cooperative behaviour.
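
The trade-off the abstract describes, and the proposed intervention of tuning the group composition of search results, can be illustrated with a quick simulation. The sketch below is not the authors' evolutionary game theoretic model; it is a minimal toy under assumed parameters: fixed provider quality `q`, a rating bias `beta` that flips good ratings for protected-group providers, and a knob `alpha` that reserves a fraction of promoted slots for that group. All names and numbers here are illustrative assumptions.

```python
import numpy as np

def run(alpha=None, N=1000, T=500, slots=100, q=0.8, beta=0.2, seed=0):
    """Simulate T rounds of rating-based promotion on a two-group platform.

    alpha: fraction of promoted slots reserved for the protected group
           (None = rank purely by reputation, ignoring group membership).
    Returns the protected group's share of all interactions (demand).
    """
    rng = np.random.default_rng(seed)
    group = np.repeat([0, 1], N)     # 0 = majority, 1 = protected group
    good = np.ones(2 * N)            # Laplace-smoothed rating tallies
    total = np.full(2 * N, 2.0)
    demand = np.zeros(2)

    for _ in range(T):
        # Small exploration noise so unrated providers are occasionally shown.
        score = good / total + rng.normal(0.0, 0.05, 2 * N)
        if alpha is None:
            promoted = np.argsort(score)[-slots:]       # pure reputation ranking
        else:
            k1 = int(alpha * slots)                     # reserved slots
            top0 = np.argsort(score[:N])[N - (slots - k1):]
            top1 = np.argsort(score[N:])[N - k1:] + N
            promoted = np.concatenate([top0, top1])
        # Users interact and rate; a good interaction with a protected-group
        # provider is nonetheless rated badly with probability beta.
        served_well = rng.random(promoted.size) < q
        misrated = (group[promoted] == 1) & (rng.random(promoted.size) < beta)
        good[promoted] += served_well & ~misrated
        total[promoted] += 1
        demand += np.bincount(group[promoted], minlength=2)

    return demand[1] / demand.sum()

print("protected demand share, reputation-only ranking:", round(run(), 3))
print("protected demand share, alpha = 0.5 reservation:", round(run(alpha=0.5), 3))
```

Under these assumptions, reputation-only ranking lets the biased ratings compound: protected providers accumulate lower scores and lose promoted slots, driving their demand share below parity, while the reservation pins their share at the chosen level even though true service quality never differed between groups.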