Masoud Mansoury

Potential Factors Leading to Popularity Unfairness in Recommender Systems: A User-Centered Analysis

Oct 04, 2023
Masoud Mansoury, Finn Duijvestijn, Imane Mourabet

Popularity bias is a well-known issue in recommender systems where a few popular items are over-represented in the input data, while the majority of less popular items are under-represented. This disparate representation often leads to biased exposure for items in the recommendation results. Extensive research has examined this bias from the item perspective and attempted to mitigate it by boosting the recommendation of less popular items. However, recent research has revealed the impact of this bias on users. Users with different degrees of tolerance toward popular items are not served fairly by the recommender system: users interested in less popular items receive more popular items in their recommendations, while users interested in popular items are recommended what they want. This is mainly a consequence of popularity bias: popular items are over-recommended. In this paper, we investigate the factors leading to this user-side unfairness of popularity bias in recommender systems. In particular, we examine two factors: 1) the relationship between this unfairness and users' interest in item categories (e.g., movie genres), and 2) the relationship between this unfairness and the diversity of popularity groups in users' profiles (the degree to which a user is interested in items with different degrees of popularity). Experiments on a movie recommendation dataset using multiple recommendation algorithms show that both factors are significantly correlated with the degree of popularity unfairness in the recommendation results.
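To make the two factors concrete, here is a minimal, hypothetical sketch of a user-centered analysis in this spirit: it labels items as head or tail by interaction count, measures each user's profile diversity as the entropy of that mix, and correlates it with a simple popularity-lift proxy for unfairness. The grouping threshold, metric definitions, and data shapes are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: correlating per-user popularity unfairness with the
# diversity of popularity groups in each user's profile. Group boundaries
# and metric names are illustrative assumptions.
import numpy as np

def popularity_groups(interactions, n_items, head_frac=0.2):
    """Label each item head (1) or tail (0) by interaction count."""
    counts = np.bincount(interactions[:, 1], minlength=n_items)
    cutoff = np.quantile(counts, 1.0 - head_frac)
    return (counts >= cutoff).astype(int)

def profile_entropy(item_ids, groups):
    """Shannon entropy of the head/tail mix in a user's profile."""
    p = groups[item_ids].mean()
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def popularity_lift(profile_items, rec_items, groups):
    """How much more 'head' content the recommendations contain than the
    user's own profile (one possible unfairness proxy)."""
    return groups[rec_items].mean() - groups[profile_items].mean()

def unfairness_correlation(interactions, recs, n_items):
    """interactions: array of (user, item) pairs; recs: dict user -> items."""
    groups = popularity_groups(interactions, n_items)
    xs, ys = [], []
    for u, rec in recs.items():
        profile = interactions[interactions[:, 0] == u, 1]
        xs.append(profile_entropy(profile, groups))
        ys.append(popularity_lift(profile, rec, groups))
    return np.corrcoef(xs, ys)[0, 1]  # Pearson correlation
```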

Predictive Uncertainty-based Bias Mitigation in Ranking

Sep 18, 2023
Maria Heuss, Daniel Cohen, Masoud Mansoury, Maarten de Rijke, Carsten Eickhoff

Societal biases contained in retrieved documents have received increased interest. Such biases, which are often prevalent in the training data and learned by the model, can cause societal harms by misrepresenting certain groups and by reinforcing stereotypes. Mitigating such biases demands algorithms that balance the trade-off between maximizing utility for the user and fairness objectives that incentivize unbiased rankings. Prior work on bias mitigation often assumes that ranking scores, which correspond to the utility a document holds for a user, can be accurately determined. In reality, there is always a degree of uncertainty in the estimate of expected document utility. This uncertainty can be approximated by viewing ranking models through a Bayesian perspective, where the standard deterministic score becomes a distribution. In this work, we investigate whether uncertainty estimates can be used to decrease the amount of bias in ranked results while minimizing the loss in measured utility. We introduce a simple method that uses the uncertainty of the ranking scores for an uncertainty-aware, post hoc approach to bias mitigation. We compare our proposed method with existing baselines for bias mitigation with respect to the utility-fairness trade-off, the controllability of the methods, and computational costs. We show that an uncertainty-based approach can provide an intuitive and flexible trade-off that outperforms all baselines without additional training requirements, allowing for post hoc use on top of arbitrary retrieval models.
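The core idea lends itself to a small sketch. Below is a hedged, simplified illustration of an uncertainty-aware post hoc re-ranker: documents from an under-exposed group are promoted only in proportion to the uncertainty of their utility estimate, so confidently scored documents keep their positions. The scoring rule and the `alpha` knob are assumptions for illustration, not the paper's exact method.

```python
# Hedged sketch of uncertainty-aware, post hoc bias mitigation: fairness
# adjustments are scaled by predictive uncertainty, so only documents the
# model is unsure about get moved. `alpha` is an assumed trade-off knob.
from dataclasses import dataclass

@dataclass
class Doc:
    doc_id: str
    mu: float       # mean ranking score (expected utility)
    sigma: float    # predictive uncertainty of that score
    protected: bool # belongs to the group the ranking under-exposes

def rerank(docs, alpha=0.5):
    """Re-rank post hoc: boost under-exposed documents in proportion to
    how uncertain their utility estimate is."""
    def adjusted(d):
        bonus = alpha * d.sigma if d.protected else -alpha * d.sigma
        return d.mu + bonus
    return sorted(docs, key=adjusted, reverse=True)

docs = [
    Doc("d1", mu=0.90, sigma=0.02, protected=False),
    Doc("d2", mu=0.85, sigma=0.30, protected=True),
    Doc("d3", mu=0.80, sigma=0.05, protected=True),
]
print([d.doc_id for d in rerank(docs, alpha=0.5)])  # d2 overtakes d1
```

Because the adjustment is purely post hoc, a re-ranker like this can sit on top of any retrieval model that exposes score uncertainty, which is the flexibility the abstract emphasizes.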

* CIKM 2023: 32nd ACM International Conference on Information and Knowledge Management  

Career Path Recommendations for Long-term Income Maximization: A Reinforcement Learning Approach

Sep 11, 2023
Spyros Avlonitis, Dor Lavi, Masoud Mansoury, David Graus

This study explores the potential of reinforcement learning algorithms to enhance career planning. Leveraging data from Randstad The Netherlands, we simulate the Dutch job market and develop strategies to optimize employees' long-term income. By formulating career planning as a Markov Decision Process (MDP) and applying reinforcement learning algorithms such as Sarsa, Q-learning, and A2C, we learn optimal policies that recommend career paths through high-income occupations and industries. The results demonstrate significant improvements in employees' income trajectories, with the RL models, particularly Q-learning and Sarsa, achieving an average increase of 5% over observed career paths. The study acknowledges limitations, including narrow job filtering, simplifications in the environment formulation, and assumptions of employment continuity and zero application costs. Future research can explore objectives beyond income optimization and address these limitations to further enhance career planning.
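As a concrete illustration of the MDP framing, the sketch below runs standard tabular Q-learning over a toy job-transition model in which states are occupations, actions are job moves, and the reward is the income of the occupation moved into. The occupations, incomes, and transition structure are invented placeholders, not Randstad data.

```python
# Minimal tabular Q-learning sketch for a career-planning MDP: states are
# occupations, actions are job moves, reward is the new job's income.
# All figures below are invented placeholders.
import random
from collections import defaultdict

incomes = {"junior_dev": 40, "senior_dev": 65, "data_analyst": 50,
           "ml_engineer": 80}
moves = {s: list(incomes) for s in incomes}  # any occupation reachable

def q_learning(episodes=5000, horizon=10, alpha=0.1, gamma=0.95, eps=0.1):
    Q = defaultdict(float)
    for _ in range(episodes):
        s = random.choice(list(incomes))
        for _ in range(horizon):
            # epsilon-greedy action selection
            a = (random.choice(moves[s]) if random.random() < eps
                 else max(moves[s], key=lambda m: Q[(s, m)]))
            r = incomes[a]                      # income of the new job
            best_next = max(Q[(a, m)] for m in moves[a])
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = a                               # move into the new job
    return Q

Q = q_learning()
policy = {s: max(moves[s], key=lambda m: Q[(s, m)]) for s in incomes}
print(policy)  # greedy career move recommended from each occupation
```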

* accepted for publication at RecSys in HR '23 (at the 17th ACM Conference on Recommender Systems) 

Fairness of Exposure in Dynamic Recommendation

Sep 05, 2023
Masoud Mansoury, Bamshad Mobasher

Exposure bias is a well-known issue in recommender systems in which exposure is not fairly distributed among items in the recommendation results. This is especially problematic when the bias is amplified over time: a few items (e.g., popular ones) are repeatedly over-represented in recommendation lists, and users' interactions with those items amplify the bias further, resulting in a feedback loop. This issue has been extensively studied in the literature in the static recommendation setting, where a single round of recommendation results is post-processed to improve exposure fairness. Less work has been done on addressing exposure bias in a dynamic recommendation setting, where the system operates over time and the recommendation model and input data are dynamically updated with ongoing user feedback on recommended items at each round. In this paper, we study exposure bias in a dynamic recommendation setting. Our goal is to show that existing bias mitigation methods designed for a static setting are unable to guarantee fairness of exposure for items in the long run. In particular, we empirically study one such method and show that applying it repeatedly fails to distribute exposure fairly among items over time. To address this limitation, we show how the method can be adapted to operate effectively in a dynamic setting and achieve long-term exposure fairness. Experiments on a real-world dataset confirm that our solution is superior in achieving long-term exposure fairness for items while maintaining recommendation accuracy.
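The adaptation the paper argues for can be illustrated with a small simulation sketch: instead of re-ranking each round in isolation, the re-ranker penalizes items by the exposure they have accumulated over previous rounds, so long-run exposure evens out. The linear penalty and position-based exposure credit below are illustrative assumptions, not the paper's exact adaptation.

```python
# Hedged sketch: making a per-round fairness re-ranker aware of exposure
# accumulated over previous rounds, so starved items catch up over time.
import numpy as np

def rerank_round(scores, cum_exposure, k, lam=0.1):
    """Pick top-k by relevance penalized by exposure already received."""
    adjusted = scores - lam * cum_exposure
    return np.argsort(-adjusted)[:k]

rng = np.random.default_rng(0)
n_items, k, rounds = 50, 10, 100
cum_exposure = np.zeros(n_items)
for _ in range(rounds):
    scores = rng.random(n_items)         # stand-in for model scores
    recs = rerank_round(scores, cum_exposure, k)
    # position-based exposure credit: top slots count more
    cum_exposure[recs] += 1.0 / np.log2(np.arange(2, k + 2))
print(cum_exposure.std())  # lower spread = more even long-run exposure
```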

Exposure-Aware Recommendation using Contextual Bandits

Sep 04, 2022
Masoud Mansoury, Bamshad Mobasher, Herke van Hoof

Exposure bias is a well-known issue in recommender systems where items and suppliers are not equally represented in the recommendation results. This is especially problematic when the bias is amplified over time: a few items (e.g., popular ones) are repeatedly over-represented in recommendation lists, and users' interactions with those items amplify the bias further, resulting in a feedback loop. This issue has been extensively studied in the literature on model-based and neighborhood-based recommendation algorithms, but less work has been done on online recommendation models, such as those based on top-K contextual bandits, where the recommendation model is dynamically updated with ongoing user feedback. In this paper, we study exposure bias in a class of well-known contextual bandit algorithms known as Linear Cascading Bandits. We analyze the ability of these algorithms to handle exposure bias and provide a fair representation of items in the recommendation results. Our analysis reveals that they tend to amplify exposure disparity among items over time. In particular, we observe that these algorithms do not properly adapt to user feedback and frequently recommend certain items even when those items are not selected by users. To mitigate this bias, we propose an Exposure-Aware (EA) reward model that updates the model parameters based on two factors: 1) user feedback (i.e., clicked or not), and 2) the position of the item in the recommendation list. In this way, the proposed model controls the utility assigned to items based on their exposure in the recommendation list. Extensive experiments on two real-world datasets using three contextual bandit algorithms show that the proposed reward model reduces exposure bias amplification in the long run while maintaining recommendation accuracy.
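A hedged sketch of the two-factor reward idea is shown below: in a cascade-style interaction, every examined item receives feedback, with the clicked item rewarded and unclicked items penalized in proportion to the exposure their position gave them. The exact penalty form is an assumption for illustration; the paper defines its own EA reward model.

```python
# Hedged sketch of an exposure-aware reward in a cascade-style setting:
# unclicked items are penalized in proportion to their positional exposure.
# The weighting scheme is an illustrative assumption.
import numpy as np

def exposure_aware_rewards(ranked_items, click_index, k):
    """Return (item, reward) pairs for one recommendation round.
    In the cascade model the user scans top-down and stops at the click,
    so only items up to the click (or all k, if no click) were examined."""
    last = click_index if click_index is not None else k - 1
    rewards = []
    for pos, item in enumerate(ranked_items[: last + 1]):
        exposure = 1.0 / np.log2(pos + 2)      # position-discounted exposure
        if click_index is not None and pos == click_index:
            rewards.append((item, 1.0))        # clicked: full reward
        else:
            rewards.append((item, -exposure))  # shown but not chosen
    return rewards

print(exposure_aware_rewards(["a", "b", "c", "d"], click_index=2, k=4))
```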

Understanding and Mitigating Multi-Sided Exposure Bias in Recommender Systems

Nov 10, 2021
Masoud Mansoury

Fairness is a critical system-level objective in recommender systems and has been the subject of extensive recent research. It is especially important in multi-sided recommendation platforms, where it may be crucial to optimize utilities not just for the end user, but also for other actors, such as item sellers or producers, who desire a fair representation of their items. Existing solutions do not properly address the various aspects of multi-sided fairness in recommendation: they either take a one-sided view (i.e., improving fairness only for one side) or do not appropriately measure fairness for each actor in the system. In this thesis, I first investigate the impact of unfair recommendations on the system and how they can negatively affect its major actors. I then propose solutions to tackle this unfairness. First, I propose a rating transformation technique that works as a pre-processing step, before building the recommendation model, to alleviate the inherent popularity bias in the input data and consequently mitigate exposure unfairness for items and suppliers in the recommendation lists. Second, I propose a general graph-based solution that works as a post-processing approach, after recommendation generation, to mitigate multi-sided exposure bias in the recommendation results. For evaluation, I introduce several metrics for measuring exposure fairness for items and suppliers and show that these metrics better capture the fairness properties of the recommendation results. I perform extensive experiments to evaluate the effectiveness of the proposed solutions. Experiments on several publicly available datasets and comparisons with various baselines confirm the superiority of the proposed solutions in improving exposure fairness for items and suppliers.
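As one concrete example of an exposure-fairness measurement of the kind described, the sketch below computes a Gini-style concentration of exposure at the item level and rolls it up to suppliers. The Gini formulation is a stand-in assumption; the thesis introduces its own metrics.

```python
# Sketch of a multi-sided exposure measurement: item-level exposure
# concentration, aggregated to suppliers. Gini here is a stand-in for the
# thesis's own metrics; the data below is invented.
import numpy as np

def gini(x):
    """0 = perfectly equal exposure, ->1 = concentrated on few entities."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    if x.sum() == 0:
        return 0.0
    cum = np.cumsum(x)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

item_exposure = np.array([120, 80, 5, 3, 1, 0, 0])   # per-item counts
supplier_of = np.array([0, 0, 1, 1, 2, 2, 2])        # item -> supplier
supplier_exposure = np.bincount(supplier_of, weights=item_exposure)
print(gini(item_exposure), gini(supplier_exposure))
```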

* Doctoral thesis 

Unbiased Cascade Bandits: Mitigating Exposure Bias in Online Learning to Rank Recommendation

Aug 07, 2021
Masoud Mansoury, Himan Abdollahpouri, Bamshad Mobasher, Mykola Pechenizkiy, Robin Burke, Milad Sabouri

Exposure bias is a well-known issue in recommender systems where items and suppliers are not equally represented in the recommendation results. This is especially problematic when the bias is amplified over time as a few popular items are repeatedly over-represented in recommendation lists. This phenomenon can be viewed as a recommendation feedback loop: the system repeatedly recommends certain items at different time points, and users' interactions with those items amplify the bias towards them over time. This issue has been extensively studied in the literature on model-based and neighborhood-based recommendation algorithms, but less work has been done on online recommendation models such as those based on multi-armed bandit algorithms. In this paper, we study exposure bias in a class of well-known bandit algorithms known as Linear Cascade Bandits. We analyze the ability of these algorithms to handle exposure bias and provide a fair representation of items and suppliers in the recommendation results. Our analysis reveals that these algorithms fail to treat items and suppliers fairly and do not sufficiently explore the item space for each user. To mitigate this bias, we propose a discounting factor, incorporated into these algorithms, that controls the exposure of items at each time step. To show the effectiveness of the proposed discounting factor in mitigating exposure bias, we perform experiments on two datasets using three cascading bandit algorithms; our results show that the proposed method improves exposure fairness for items and suppliers.
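The discounting idea can be sketched briefly: scale each item's optimistic (UCB-style) score by a factor that decays with the exposure the item has already received, so over-shown items make room for exploration. The exponential decay form below is an illustrative assumption, not the paper's exact formulation.

```python
# Hedged sketch: exposure-discounted optimistic scoring for a cascading
# bandit. The exponential discount is an illustrative assumption.
import numpy as np

def discounted_ucb(means, counts, exposures, t, gamma=0.99):
    """UCB-style score per item, discounted by accumulated exposure."""
    bonus = np.sqrt(1.5 * np.log(t) / np.maximum(counts, 1))
    return (means + bonus) * gamma ** exposures

def recommend(means, counts, exposures, t, k):
    scores = discounted_ucb(means, counts, exposures, t)
    return np.argsort(-scores)[:k]

rng = np.random.default_rng(1)
n, k = 20, 5
means, counts, exposures = rng.random(n), np.ones(n), np.zeros(n)
recs = recommend(means, counts, exposures, t=10, k=k)
exposures[recs] += 1  # shown items pay an exposure cost next round
print(recs)
```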

A Graph-based Approach for Mitigating Multi-sided Exposure Bias in Recommender Systems

Jul 07, 2021
Masoud Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, Bamshad Mobasher, Robin Burke

Fairness is a critical system-level objective in recommender systems and has been the subject of extensive recent research. A specific form of fairness is supplier exposure fairness, where the objective is to ensure equitable coverage of items across all suppliers in the recommendations provided to users. This is especially important in multi-stakeholder recommendation scenarios, where it may be important to optimize utilities not just for the end user, but also for other stakeholders, such as item sellers or producers, who desire a fair representation of their items. This type of supplier fairness is sometimes pursued by attempting to increase aggregate diversity in order to mitigate popularity bias and improve the coverage of long-tail items in recommendations. In this paper, we introduce FairMatch, a general graph-based algorithm that works as a post-processing approach after recommendation generation to improve exposure fairness for items and suppliers. The algorithm iteratively adds high-quality items that have low visibility, or items from suppliers with low exposure, to the users' final recommendation lists. A comprehensive set of experiments on two datasets and comparisons with state-of-the-art baselines show that FairMatch, while significantly improving exposure fairness and aggregate diversity, maintains an acceptable level of recommendation relevance.
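FairMatch itself is formulated as an iterative graph-based procedure; the greatly simplified greedy sketch below only illustrates the underlying promote-low-visibility idea: swap high-quality but rarely shown candidates into final top-k lists. The function name, swapping rule, and parameters are assumptions for illustration, not the published algorithm.

```python
# Greatly simplified, hypothetical sketch of FairMatch-style post-processing:
# promote high-quality, low-visibility candidates into final top-k lists.
# The real FairMatch uses graph matching; this greedy loop is illustrative.
from collections import Counter

def fairmatch_like(candidates, quality, k, boost=2):
    """candidates: dict user -> ranked candidate item list (len >= k)."""
    visibility = Counter(it for lst in candidates.values() for it in lst[:k])
    finals = {}
    for user, lst in candidates.items():
        final = lst[:k]
        pool = [it for it in lst[k:]
                if quality[it] >= min(quality[i] for i in final)]
        # swap in up to `boost` rare-but-good items per user
        for it in sorted(pool, key=lambda i: visibility[i])[:boost]:
            worst = max(final, key=lambda i: visibility[i])
            final[final.index(worst)] = it
            visibility[it] += 1
            visibility[worst] -= 1
        finals[user] = final
    return finals

cands = {"u1": ["a", "b", "c", "d", "e"], "u2": ["a", "b", "d", "c", "f"]}
qual = dict(a=5, b=4, c=4, d=3, e=4, f=4)
print(fairmatch_like(cands, qual, k=3))  # rare items e, f get promoted
```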

* arXiv admin note: substantial text overlap with arXiv:2005.01148 

User-centered Evaluation of Popularity Bias in Recommender Systems

Mar 10, 2021
Himan Abdollahpouri, Masoud Mansoury, Robin Burke, Bamshad Mobasher, Edward Malthouse

Recommendation and ranking systems are known to suffer from popularity bias: the tendency of the algorithm to favor a few popular items while under-representing the majority of other items. Prior research has examined various approaches for mitigating popularity bias and enhancing the recommendation of long-tail (less popular) items. The effectiveness of these approaches is often assessed with metrics that evaluate the extent to which over-concentration on popular items is reduced. However, not much attention has been given to the user-centered evaluation of this bias: how users with different levels of interest in popular items are affected by such algorithms. In this paper, we show the limitations of existing metrics for evaluating popularity bias mitigation when these algorithms are assessed from the users' perspective, and we propose a new metric that addresses these limitations. In addition, we present an effective approach that mitigates popularity bias from a user-centered point of view. Finally, we investigate several state-of-the-art approaches proposed in recent years to mitigate popularity bias and evaluate their performance using both the existing metrics and the users' perspective. Our experimental results on two publicly available datasets show that existing popularity bias mitigation techniques ignore users' tolerance towards popular items. Our proposed user-centered method tackles popularity bias effectively for different users while also improving on the existing metrics.
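One plausible shape for such a user-centered metric is sketched below: compare the popularity make-up of each user's profile with that of their recommendation list and average the gap over users, so a score of zero means every user received the popularity mix they actually consume. The grouping and naming are assumptions, not the paper's exact metric.

```python
# Hypothetical sketch of a user-centered popularity metric: mean gap
# between each user's profile and recommendation popularity mix.
import numpy as np

GROUPS = ("head", "mid", "tail")

def group_share(item_ids, item_group):
    """Fraction of a list falling in each popularity group."""
    labels = [item_group[i] for i in item_ids]
    return np.array([labels.count(g) for g in GROUPS]) / len(labels)

def user_popularity_deviation(profiles, recs, item_group):
    """Mean L1 gap between profile and recommendation group shares;
    0 means every user got the popularity mix they actually consume."""
    gaps = [np.abs(group_share(profiles[u], item_group)
                   - group_share(recs[u], item_group)).sum()
            for u in profiles]
    return float(np.mean(gaps))

item_group = {1: "head", 2: "head", 3: "mid", 4: "tail", 5: "tail"}
profiles = {"u1": [4, 5, 3], "u2": [1, 2, 3]}
recs     = {"u1": [1, 2, 3], "u2": [1, 2, 3]}
print(user_popularity_deviation(profiles, recs, item_group))
```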

* Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization (UMAP '21), June 21--25, 2021, Utrecht, Netherlands. arXiv admin note: text overlap with arXiv:2007.12230 

Bias Disparity in Collaborative Recommendation: Algorithmic Evaluation and Comparison

Aug 02, 2019
Masoud Mansoury, Bamshad Mobasher, Robin Burke, Mykola Pechenizkiy

Research on fairness in machine learning has recently been extended to recommender systems. One factor that may impact fairness is bias disparity: the degree to which a group's preferences for various item categories fail to be reflected in the recommendations the group receives. In some cases, biases in the original data may be amplified or reversed by the underlying recommendation algorithm. In this paper, we explore how different recommendation algorithms reflect the trade-off between ranking quality and bias disparity. Our experiments include neighborhood-based, model-based, and trust-aware recommendation algorithms.
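Bias disparity has a standard formalization in this line of work, sketched below: the bias of a user group toward an item category is its interaction share in that category normalized by the category's catalog share, and disparity is the relative change of that bias from the input data to the recommendations. The toy numbers are invented.

```python
# Sketch of the bias-disparity computation as commonly formalized in this
# line of work. Variable names and toy numbers are illustrative.
import numpy as np

def bias(pref_counts, category_sizes):
    """B(G, C) = share of group G's interactions in category C,
    normalized by category C's share of the catalog."""
    pr = pref_counts / pref_counts.sum()
    p = category_sizes / category_sizes.sum()
    return pr / p

def bias_disparity(source_counts, rec_counts, category_sizes):
    """BD(G, C) = (B_R - B_S) / B_S; positive values mean the
    recommender amplified the group's input-data bias."""
    b_s = bias(source_counts, category_sizes)
    b_r = bias(rec_counts, category_sizes)
    return (b_r - b_s) / b_s

category_sizes = np.array([800, 200])   # e.g. action vs. romance titles
source_counts  = np.array([60, 40])     # group G's interactions per category
rec_counts     = np.array([75, 25])     # group G's recommendations per category
print(bias_disparity(source_counts, rec_counts, category_sizes))
```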

* Workshop on Recommendation in Multi-Stakeholder Environments (RMSE) at ACM RecSys 2019, Copenhagen, Denmark 