
"Recommendation": models, code, and papers

Fairness-aware Personalized Ranking Recommendation via Adversarial Learning

Mar 14, 2021
Ziwei Zhu, Jianling Wang, James Caverlee

Recommendation algorithms typically build models based on historical user-item interactions (e.g., clicks, likes, or ratings) to provide a personalized ranked list of items. These interactions are often distributed unevenly over different groups of items due to varying user preferences. However, we show that recommendation algorithms can inherit or even amplify this imbalanced distribution, leading to unfair recommendations to item groups. Concretely, we formalize the concepts of ranking-based statistical parity and equal opportunity as two measures of fairness in personalized ranking recommendation for item groups. Then, we empirically show that one of the most widely adopted algorithms -- Bayesian Personalized Ranking -- produces unfair recommendations, which motivates our effort to propose a novel fairness-aware personalized ranking model. The debiased model is able to improve the two proposed fairness metrics while preserving recommendation performance. Experiments on three public datasets show strong fairness improvement of the proposed model versus state-of-the-art alternatives. This paper is an extended and reorganized version of our SIGIR 2020 paper \cite{zhu2020measuring}. In this paper, we re-frame the studied problem as 'item recommendation fairness' in personalized ranking recommendation systems, and provide more details about the training process of the proposed model and the experimental setup.

* 11 pages, 11 figures, an extended version of a conference published paper 
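As a rough illustration of what a ranking-based group-fairness check can look like, the sketch below measures how top-k recommendation slots are split across item groups. It is a simplified stand-in, not the authors' exact statistical-parity or equal-opportunity metric, and all names and numbers are made up.

```python
import numpy as np

def topk_group_exposure(scores, item_groups, k=10):
    """Share of top-k recommendation slots given to each item group.

    scores: (n_users, n_items) predicted relevance scores.
    item_groups: (n_items,) integer group id per item.
    Returns {group_id: fraction of all top-k slots}.
    """
    topk = np.argsort(-scores, axis=1)[:, :k]        # per-user top-k item ids
    groups_in_topk = item_groups[topk].ravel()        # group id of every recommended slot
    counts = np.bincount(groups_in_topk, minlength=item_groups.max() + 1)
    return {g: c / groups_in_topk.size for g, c in enumerate(counts)}

# Toy example: two item groups; a large exposure gap signals a parity violation.
rng = np.random.default_rng(0)
scores = rng.normal(size=(100, 50))
item_groups = np.array([0] * 40 + [1] * 10)           # group 1 is the minority group
print(topk_group_exposure(scores, item_groups, k=10))
```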
  

RecoGym: A Reinforcement Learning Environment for the problem of Product Recommendation in Online Advertising

Sep 14, 2018
David Rohde, Stephen Bonner, Travis Dunlop, Flavian Vasile, Alexandros Karatzoglou

Recommender Systems are becoming ubiquitous in many settings and take many forms, from product recommendation in e-commerce stores, to query suggestions in search engines, to friend recommendation in social networks. Current research directions, which are largely based upon supervised learning from historical data, appear to be showing diminishing returns, with many practitioners reporting a discrepancy between improvements in offline metrics for supervised learning and the online performance of the newly proposed models. One possible reason is that we are using the wrong paradigm: when looking at the long-term cycle of collecting historical performance data, creating a new version of the recommendation model, A/B testing it, and then rolling it out, we see many commonalities with the reinforcement learning (RL) setup, where the agent observes the environment and acts upon it in order to change its state towards better states (states with higher rewards). To this end we introduce RecoGym, an RL environment for recommendation, which is defined by a model of user traffic patterns on e-commerce sites and the users' response to recommendations on publisher websites. We believe that this is an important step forward for the field of recommendation systems research, one that could open up an avenue of collaboration between the recommender systems and reinforcement learning communities and lead to better alignment between offline and online performance metrics.

* Accepted at the REVEAL workshop at the Twelfth ACM Conference on Recommender Systems (RecSys '18), October 2--7, 2018, Vancouver, BC, Canada 
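For intuition, here is a generic Gym-style session roll-out of the kind RecoGym supports. The helper works with any environment exposing the classic Gym interface; the RecoGym-specific setup in the trailing comment is an assumption and should be checked against the project's repository.

```python
import gym

def run_session(env: gym.Env, policy=None):
    """Roll out one user session in a Gym-style recommendation environment.

    policy: callable mapping an observation to an action (a product to recommend);
    defaults to uniform random recommendations. Uses the classic (pre-0.26) Gym
    step signature, which RecoGym was built against.
    """
    obs = env.reset()
    done, clicks = False, 0
    while not done:
        action = policy(obs) if policy else env.action_space.sample()
        obs, reward, done, info = env.step(action)
        clicks += reward      # RecoGym-style environments emit a click signal as reward
    return clicks

# Hypothetical usage -- the exact RecoGym environment id and setup calls are
# assumptions; see the RecoGym repository for the real configuration:
#   import recogym
#   env = gym.make('reco-gym-v1')
#   print(run_session(env))
```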
  

The emergence of Explainability of Intelligent Systems: Delivering Explainable and Personalised Recommendations for Energy Efficiency

Oct 26, 2020
Christos Sardianos, Iraklis Varlamis, Christos Chronis, George Dimitrakopoulos, Abdullah Alsalemi, Yassine Himeur, Faycal Bensaali, Abbes Amira

The recent advances in artificial intelligence, namely in machine learning and deep learning, have boosted the performance of intelligent systems in several ways. This has raised human expectations, but also created the need for a deeper understanding of how intelligent systems think and decide. The concept of explainability emerged, meaning that the internal system mechanics are explained in human terms. Recommendation systems are intelligent systems that support human decision making, and as such they have to be explainable in order to increase user trust and improve the acceptance of recommendations. In this work, we focus on a context-aware recommendation system for energy efficiency and develop a mechanism for explainable and persuasive recommendations, which are personalized to user preferences and habits. The persuasive facts either emphasize the prospect of economic savings (Econ) or a positive ecological impact (Eco), and explanations provide the reason for recommending an energy-saving action. Based on a study conducted using a Telegram bot, different scenarios have been validated with actual data and human feedback. Current results show a total increase of 19% in the recommendation acceptance ratio when both economic and ecological persuasive facts are employed. This approach to recommendation systems demonstrates how intelligent recommendations can effectively encourage energy-saving behavior.

* International Journal of Intelligent Systems, 2020 
* 19 pages, 8 figures, 1 table 
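A minimal sketch of how a persuasive, explainable recommendation message could be composed around an Econ or Eco fact. The constants, wording, and function name are purely illustrative and are not the templates used in the study.

```python
def build_recommendation_message(action, kwh_saved, price_per_kwh=0.15,
                                 co2_per_kwh=0.4, persuasion='Econ'):
    """Compose a persuasive, explainable recommendation message.

    persuasion: 'Econ' stresses monetary savings, 'Eco' stresses avoided CO2.
    All constants are illustrative placeholders.
    """
    if persuasion == 'Econ':
        fact = f"you could save about {kwh_saved * price_per_kwh:.2f} EUR per month"
    else:
        fact = f"you could avoid about {kwh_saved * co2_per_kwh:.1f} kg of CO2 per month"
    return f"We recommend to {action} because {fact}."

print(build_recommendation_message("turn off the A/C when leaving the room", kwh_saved=30))
```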
  

Balancing Consumer and Business Value of Recommender Systems: A Simulation-based Analysis

Mar 10, 2022
Nada Ghanem, Stephan Leitner, Dietmar Jannach

Automated recommendations can nowadays be found on many online platforms, and such recommendations can create substantial value for consumers and providers. Often, however, not all recommendable items have the same profit margin, and providers might thus be tempted to promote items that maximize their profit. In the short run, consumers might accept non-optimal recommendations, but they may lose their trust in the long run. Ultimately, this leads to the problem of designing balanced recommendation strategies, which consider both consumer and provider value and lead to sustained business success. This work proposes a simulation framework based on Agent-based Modeling designed to help providers explore longitudinal dynamics of different recommendation strategies. In our model, consumer agents receive recommendations from providers, and the perceived quality of the recommendations influences the consumers' trust over time. In addition, we consider network effects where positive and negative experiences are shared with others on social media. Simulations with our framework show that balanced strategies that consider both stakeholders indeed lead to stable consumer trust and sustained profitability. We also find that social media can reinforce phenomena like the loss of trust in the case of negative experiences. To ensure reproducibility and foster future research, we publicly share our flexible simulation framework.

* 32 pages, 9 figures 
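To illustrate the kind of longitudinal dynamics such a framework explores, here is a toy agent-based simulation in which consumer trust responds to perceived recommendation quality plus a simple word-of-mouth term. All update rules and parameters are invented for illustration and are not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate(profit_weight, n_consumers=500, n_rounds=100, social_influence=0.05):
    """Toy agent-based simulation of trust under a recommendation strategy.

    profit_weight in [0, 1]: 0 = recommend purely by consumer relevance,
    1 = recommend purely by provider margin.
    """
    trust = np.full(n_consumers, 0.5)
    profit = 0.0
    for _ in range(n_rounds):
        # Perceived quality drops as the provider biases towards high-margin items.
        quality = rng.normal(loc=1.0 - profit_weight, scale=0.1, size=n_consumers)
        accept = rng.random(n_consumers) < trust           # trusting consumers accept more
        profit += accept.sum() * (0.5 + profit_weight)     # margin grows with profit_weight
        # Trust moves towards experienced quality, plus a word-of-mouth (social) term.
        trust = np.clip(trust + 0.1 * (quality - trust) * accept
                        + social_influence * (quality.mean() - trust), 0, 1)
    return trust.mean(), profit

for w in (0.0, 0.5, 1.0):
    print(w, simulate(w))
```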
  

Improving Outfit Recommendation with Co-supervision of Fashion Generation

Aug 24, 2019
Yujie Lin, Pengjie Ren, Zhumin Chen, Zhaochun Ren, Jun Ma, Maarten de Rijke

The task of fashion recommendation includes two main challenges: visual understanding and visual matching. Visual understanding aims to extract effective visual features. Visual matching aims to model a human notion of compatibility to compute a match between fashion items. Most previous studies rely on recommendation loss alone to guide visual understanding and matching. Although the features captured by these methods describe basic characteristics (e.g., color, texture, shape) of the input items, they are not directly related to the visual signals of the output items (to be recommended). This is problematic because the aesthetic characteristics (e.g., style, design), based on which we can directly infer the output items, are lacking. Features are learned under the recommendation loss alone, where the supervision signal is simply whether the given two items are matched or not. To address this problem, we propose a neural co-supervision learning framework, called the FAshion Recommendation Machine (FARM). FARM improves visual understanding by incorporating the supervision of generation loss, which we hypothesize to be able to better encode aesthetic information. FARM enhances visual matching by introducing a novel layer-to-layer matching mechanism to fuse aesthetic information more effectively, while avoiding paying too much attention to generation quality at the expense of recommendation performance. Extensive experiments on two publicly available datasets show that FARM outperforms state-of-the-art models on outfit recommendation, in terms of AUC and MRR. Detailed analyses of generated and recommended items demonstrate that FARM can encode better features and generate high quality images as references to improve recommendation performance.
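The co-supervision idea can be sketched as a joint objective that couples a BPR-style ranking term with a generation (reconstruction) term. The snippet below is an illustrative reading of that idea, not FARM's actual loss.

```python
import torch
import torch.nn.functional as F

def co_supervised_loss(match_score_pos, match_score_neg,
                       generated_img, target_img, gen_weight=0.5):
    """Joint objective in the spirit of co-supervision (illustrative, not FARM's exact loss).

    match_score_pos / match_score_neg: compatibility scores for a matching and a
    non-matching outfit item (BPR-style ranking term).
    generated_img / target_img: generated and ground-truth images of the item to
    recommend (generation term providing the co-supervision signal).
    """
    rec_loss = -F.logsigmoid(match_score_pos - match_score_neg).mean()
    gen_loss = F.mse_loss(generated_img, target_img)
    return rec_loss + gen_weight * gen_loss
```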

  

Fair Multi-Stakeholder News Recommender System with Hypergraph ranking

Dec 01, 2020
Alireza Gharahighehi, Celine Vens, Konstantinos Pliakos

Recommender systems are typically designed to fulfill end user needs. However, in some domains the users are not the only stakeholders in the system. For instance, on a news aggregator website, users, authors, magazines, as well as the platform itself, are potential stakeholders. Most collaborative filtering recommender systems suffer from popularity bias. Therefore, if the recommender system only considers users' preferences, it presumably over-represents popular providers and under-represents less popular ones. To address this issue, one should consider other stakeholders when generating ranked lists. In this paper we demonstrate that hypergraph learning has the natural capability of handling a multi-stakeholder recommendation task. A hypergraph can model high-order relations between different types of objects and is therefore naturally suited to generating recommendation lists that consider multiple stakeholders. We form the recommendations in time-wise rounds and learn to adapt the weights of stakeholders to increase the coverage of low-covered stakeholders over time. The results show that the proposed approach counters popularity bias and produces fairer recommendations with respect to authors in two news datasets, at a low cost in precision.
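Hypergraph ranking of this kind typically builds on the standard label-propagation formulation over a hypergraph incidence matrix (Zhou et al.). A minimal version is sketched below, with the hyperedge weights as the knob a multi-stakeholder system could adapt over rounds; the implementation details are assumptions, not the paper's code.

```python
import numpy as np

def hypergraph_ranking(H, w, y, alpha=0.9):
    """Rank vertices of a hypergraph by label propagation (Zhou et al.-style).

    H: (n_vertices, n_edges) incidence matrix; vertices can be items, users,
       authors, magazines; hyperedges connect co-occurring objects.
    w: (n_edges,) hyperedge weights -- raising a stakeholder's weight boosts
       the items it is connected to.
    y: (n_vertices,) query/preference vector for the target user.
    """
    W = np.diag(w)
    dv = H @ w                                    # vertex degrees
    De = np.diag(H.sum(axis=0))                   # hyperedge degrees
    Dv_is = np.diag(1.0 / np.sqrt(dv))
    Theta = Dv_is @ H @ W @ np.linalg.inv(De) @ H.T @ Dv_is
    # Closed-form solution of the propagation f = alpha * Theta f + (1 - alpha) y.
    return np.linalg.solve(np.eye(H.shape[0]) - alpha * Theta, (1 - alpha) * y)
```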

  

User-controllable Recommendation Against Filter Bubbles

Apr 29, 2022
Wenjie Wang, Fuli Feng, Liqiang Nie, Tat-Seng Chua

Recommender systems usually face the issue of filter bubbles: over-recommending homogeneous items based on user features and historical interactions. Filter bubbles grow along the feedback loop and inadvertently narrow user interests. Existing work usually mitigates filter bubbles by incorporating objectives apart from accuracy, such as diversity and fairness. However, these approaches typically sacrifice accuracy, hurting model fidelity and user experience. Worse still, users have to passively accept the recommendation strategy and influence the system in an inefficient manner with high latency, e.g., by continually providing feedback (e.g., like and dislike) until the system recognizes the user intention. This work proposes a new recommender prototype called the User-Controllable Recommender System (UCRS), which enables users to actively control the mitigation of filter bubbles. Functionally, 1) UCRS can alert users if they are deeply stuck in filter bubbles. 2) UCRS supports four kinds of control commands for users to mitigate the bubbles at different granularities. 3) UCRS can respond to the controls and adjust the recommendations on the fly. The key to adjusting lies in blocking the effect of out-of-date user representations on recommendations, since they contain historical information inconsistent with the control commands. As such, we develop a causality-enhanced User-Controllable Inference (UCI) framework, which can quickly revise the recommendations based on user controls in the inference stage and utilize counterfactual inference to mitigate the effect of out-of-date user representations. Experiments on three datasets validate that the UCI framework can effectively recommend more desired items based on user controls, showing promising performance w.r.t. both accuracy and diversity.

* Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2022) 
* Accepted by SIGIR 2022 
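A compact way to picture the counterfactual adjustment: re-score items after blanking the user-representation dimensions a control command declares out of date, and keep only as much of the out-of-date effect as desired. The function below is a sketch of that idea under assumed interfaces, not the UCI implementation.

```python
import torch

def uci_adjusted_scores(score_fn, user_repr, item_reprs, outdated_dims, keep=0.0):
    """Counterfactual-style re-scoring sketch (assumed interfaces, not the paper's code).

    score_fn(user, items) -> item scores. The control command marks some dimensions
    of the user representation as out of date; we blank them and blend the two
    predictions, keeping only a fraction `keep` of the out-of-date effect.
    """
    factual = score_fn(user_repr, item_reprs)
    masked = user_repr.clone()
    masked[..., outdated_dims] = 0.0               # counterfactual user: outdated info removed
    counterfactual = score_fn(masked, item_reprs)
    return counterfactual + keep * (factual - counterfactual)
```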
  

Rewiring What-to-Watch-Next Recommendations to Reduce Radicalization Pathways

Feb 01, 2022
Francesco Fabbri, Yanhao Wang, Francesco Bonchi, Carlos Castillo, Michael Mathioudakis

Recommender systems typically suggest to users content similar to what they consumed in the past. If a user happens to be exposed to strongly polarized content, she might subsequently receive recommendations which may steer her towards more and more radicalized content, eventually being trapped in what we call a "radicalization pathway". In this paper, we study the problem of mitigating radicalization pathways using a graph-based approach. Specifically, we model the set of recommendations of a "what-to-watch-next" recommender as a d-regular directed graph where nodes correspond to content items, links to recommendations, and paths to possible user sessions. We measure the "segregation" score of a node representing radicalized content as the expected length of a random walk from that node to any node representing non-radicalized content. High segregation scores are associated with a larger chance of users getting trapped in radicalization pathways. Hence, we define the problem of reducing the prevalence of radicalization pathways by selecting a small number of edges to "rewire", so as to minimize the maximum of the segregation scores among all radicalized nodes, while maintaining the relevance of the recommendations. We prove that the problem of finding the optimal set of recommendations to rewire is NP-hard and NP-hard to approximate within any factor. Therefore, we turn our attention to heuristics, and propose an efficient yet effective greedy algorithm based on absorbing random walk theory. Our experiments on real-world datasets in the context of video and news recommendations confirm the effectiveness of our proposal.

* To appear in the Web conference 2022 (WWW '22) 
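The segregation score maps directly onto absorbing-Markov-chain theory: treating non-radicalized nodes as absorbing states, the expected walk length from each radicalized (transient) node is given by the fundamental matrix N = (I - Q)^{-1}. The sketch below computes it on a toy graph; the matrix set-up is an illustration consistent with this definition, not the paper's code.

```python
import numpy as np

def segregation_scores(P, radicalized):
    """Expected random-walk length from each radicalized node to any
    non-radicalized node, via absorbing-Markov-chain theory.

    P: (n, n) row-stochastic transition matrix of the recommendation graph
       (uniform over the d outgoing recommendations of each node).
    radicalized: boolean mask; radicalized nodes are transient,
       non-radicalized nodes are absorbing.
    """
    idx = np.flatnonzero(radicalized)
    Q = P[np.ix_(idx, idx)]                      # transitions among radicalized nodes
    N = np.linalg.inv(np.eye(len(idx)) - Q)      # fundamental matrix
    return N @ np.ones(len(idx))                 # expected steps before absorption

# Toy 4-node example: nodes 0 and 1 are radicalized, 2 and 3 are not.
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.5, 0.0, 0.0, 0.5],
              [0.0, 0.0, 0.5, 0.5],
              [0.5, 0.0, 0.0, 0.5]])
print(segregation_scores(P, np.array([True, True, False, False])))   # -> [2. 2.]
```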
  

Leave No User Behind: Towards Improving the Utility of Recommender Systems for Non-mainstream Users

Feb 02, 2021
Roger Zhe Li, Julián Urbano, Alan Hanjalic

In a collaborative-filtering recommendation scenario, biases in the data will likely propagate in the learned recommendations. In this paper we focus on the so-called mainstream bias: the tendency of a recommender system to provide better recommendations to users who have a mainstream taste, as opposed to non-mainstream users. We propose NAECF, a conceptually simple but effective idea to address this bias. The idea consists of adding an autoencoder (AE) layer when learning user and item representations with text-based Convolutional Neural Networks. The AEs, one for the users and one for the items, serve as adversaries to the process of minimizing the rating prediction error when learning how to recommend. They enforce that the specific unique properties of all users and items are sufficiently well incorporated and preserved in the learned representations. These representations, extracted as the bottlenecks of the corresponding AEs, are expected to be less biased towards mainstream users, and to provide more balanced recommendation utility across all users. Our experimental results confirm these expectations, significantly improving the recommendations for non-mainstream users while maintaining the recommendation quality for mainstream users. Our results emphasize the importance of deploying extensive content-based features, such as online reviews, in order to better represent users and items to maximize the de-biasing effect.

* 9 pages, 6 figures, accepted to WSDM 2021 
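The NAECF idea can be summarized as a joint objective in which user and item autoencoder reconstruction terms push back against pure rating-error minimization. The loss below is an illustrative sketch under assumed tensor shapes, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def naecf_style_loss(pred_rating, true_rating,
                     user_text, user_recon, item_text, item_recon,
                     ae_weight=1.0):
    """Joint objective in the spirit of NAECF (illustrative sketch).

    The autoencoder reconstruction terms act as adversaries to rating-error
    minimization: user/item representations must retain enough information to
    reconstruct the users' and items' review-text features.
    """
    rec_loss = F.mse_loss(pred_rating, true_rating)   # rating prediction error
    user_ae = F.mse_loss(user_recon, user_text)       # user autoencoder term
    item_ae = F.mse_loss(item_recon, item_text)       # item autoencoder term
    return rec_loss + ae_weight * (user_ae + item_ae)
```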
  