
"Recommendation": models, code, and papers

Textual Explanations and Critiques in Recommendation Systems

May 15, 2022
Diego Antognini

Artificial intelligence and machine learning algorithms have become ubiquitous. Although they offer a wide range of benefits, their adoption in decision-critical fields is limited by their lack of interpretability, particularly with textual data. Moreover, with more data available than ever before, it has become increasingly important to explain automated predictions. Generally, users find it difficult to understand the underlying computational processes and interact with the models, especially when the models fail to generate the outcomes or explanations, or both, correctly. This problem highlights the growing need for users to better understand the models' inner workings and gain control over their actions. This dissertation focuses on two fundamental challenges of addressing this need. The first involves explanation generation: inferring high-quality explanations from text documents in a scalable and data-driven manner. The second challenge consists in making explanations actionable, and we refer to it as critiquing. This dissertation examines two important applications in natural language processing and recommendation tasks. Overall, we demonstrate that interpretability does not come at the cost of reduced performance in two consequential applications. Our framework is applicable to other fields as well. This dissertation presents an effective means of closing the gap between promise and practice in artificial intelligence.

* Ph.D. Thesis, École Polytechnique Fédérale de Lausanne (EPFL). See https://infoscience.epfl.ch/record/292597 for the original version 


Towards Retrieval-based Conversational Recommendation

Sep 06, 2021
Ahtsham Manzoor, Dietmar Jannach

Conversational recommender systems (CRS) have attracted immense attention recently. The most recent approaches rely on neural models trained on recorded dialogs between humans, implementing an end-to-end learning process. These systems are commonly designed to generate responses given the user's utterances in natural language. One main challenge is that these generated responses both have to be appropriate for the given dialog context and must be grammatically and semantically correct. An alternative to such generation-based approaches is to retrieve responses from pre-recorded dialog data and to adapt them if needed. Such retrieval-based approaches were successfully explored in the context of general conversational systems, but have received limited attention in recent years for CRS. In this work, we re-assess the potential of such approaches and design and evaluate a novel technique for response retrieval and ranking. A user study (N=90) revealed that the responses by our system were on average of higher quality than those of two recent generation-based systems. We furthermore found that the quality ranking of the two generation-based approaches is not aligned with the results from the literature, which points to open methodological questions. Overall, our research underlines that retrieval-based approaches should be considered an alternative or complement to language generation approaches.

* 29 pages, 5 figures, 7 tables 
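The retrieval step described above, matching the current utterance against pre-recorded dialog contexts and reusing the best match's response, can be sketched with a plain bag-of-words cosine ranker. The toy corpus and scoring below are illustrative assumptions, not the paper's actual retrieval-and-ranking technique:

```python
import math
from collections import Counter

def bow(text):
    # lowercase bag-of-words term counts
    return Counter(text.lower().split())

def cosine(a, b):
    # cosine similarity between two sparse count vectors
    num = sum(c * b.get(t, 0) for t, c in a.items())
    den = (math.sqrt(sum(c * c for c in a.values()))
           * math.sqrt(sum(c * c for c in b.values())))
    return num / den if den else 0.0

def retrieve(utterance, corpus):
    """Return the response of the pre-recorded (context, response) pair
    whose context is most similar to the current utterance."""
    q = bow(utterance)
    return max(corpus, key=lambda pair: cosine(q, bow(pair[0])))[1]

corpus = [
    ("can you recommend a scary movie", "You might like The Shining."),
    ("i want something funny tonight", "Superbad is a great comedy."),
]
print(retrieve("please recommend a good scary film", corpus))
# → You might like The Shining.
```

The paper ranks candidates with a dedicated technique; cosine similarity here only stands in for that scoring stage.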


Requirements Elicitation in Cognitive Service for Recommendation

Mar 29, 2022
Bolin Zhang, Zhiying Tu, Yunzhe Xu, Dianhui Chu, Xiaofei Xu

Cognitive services now offer a more interactive way to understand users' requirements through human-machine conversation: they must capture requirements from users' utterances and respond with relevant, suitable service resources. To this end, two phases are required: (I) sequence planning and real-time detection of user requirements, and (II) service resource selection and response generation. Existing work ignores the potential connection between these two phases. To model this connection, a two-phase requirement elicitation method is proposed. For phase I, this paper proposes a user requirement elicitation framework (URef) that plans a potential requirement sequence, grounded in the user profile and a personal knowledge base, before the conversation begins. During the conversation, URef can also predict the user's true requirement and judge from the user's utterances whether that requirement has been fulfilled. For phase II, this paper proposes SaRSNet, an attention-based response generation model. It selects the appropriate resource (i.e., a knowledge triple) in line with the requirement predicted by URef and then generates a suitable response for recommendation. Experimental results on the open dataset \emph{DuRecDial} show significant improvements over the baselines, demonstrating the effectiveness of the proposed methods.



The Price of Privacy in Untrusted Recommendation Engines

Oct 27, 2014
Siddhartha Banerjee, Nidhi Hegde, Laurent Massoulié

Recent increase in online privacy concerns prompts the following question: can a recommender system be accurate if users do not entrust it with their private data? To answer this, we study the problem of learning item-clusters under local differential privacy, a powerful, formal notion of data privacy. We develop bounds on the sample-complexity of learning item-clusters from privatized user inputs. Significantly, our results identify a sample-complexity separation between learning in an information-rich and an information-scarce regime, thereby highlighting the interaction between privacy and the amount of information (ratings) available to each user. In the information-rich regime, where each user rates at least a constant fraction of items, a spectral clustering approach is shown to achieve a sample-complexity lower bound derived from a simple information-theoretic argument based on Fano's inequality. However, the information-scarce regime, where each user rates only a vanishing fraction of items, is found to require a fundamentally different approach both for lower bounds and algorithms. To this end, we develop new techniques for bounding mutual information under a notion of channel-mismatch, and also propose a new algorithm, MaxSense, and show that it achieves optimal sample-complexity in this setting. The techniques we develop for bounding mutual information may be of broader interest. To illustrate this, we show their applicability to $(i)$ learning based on 1-bit sketches, and $(ii)$ adaptive learning, where queries can be adapted based on answers to past queries.

* Preliminary version presented at the 50th Allerton Conference, 2012 
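A minimal illustration of local differential privacy, the privacy notion the paper works with, is randomized response on binary ratings: each user flips their bit with a known probability before reporting it, and the server debiases the aggregate. This sketch is not the paper's MaxSense algorithm; the binary rating encoding and the parameters below are assumptions:

```python
import math
import random

def randomized_response(bit, epsilon):
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it.
    Each user privatizes locally; the server never sees the raw rating."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if random.random() < p else 1 - bit

def debiased_mean(reports, epsilon):
    """Invert the known flip probability to recover an unbiased estimate
    of the true fraction of positive ratings."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return (sum(reports) / len(reports) - (1 - p)) / (2 * p - 1)

random.seed(0)
true_bits = [1 if random.random() < 0.3 else 0 for _ in range(20000)]
reports = [randomized_response(b, epsilon=1.0) for b in true_bits]
print(round(debiased_mean(reports, epsilon=1.0), 2))  # close to the true mean 0.3
```

The privacy/utility tension the abstract analyzes shows up directly here: smaller epsilon means more flipping, hence more samples needed for the same estimation accuracy.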


Layer-wise Relevance Propagation for Explainable Recommendations

Jul 17, 2018
Homanga Bharadhwaj

In this paper, we tackle the problem of explanations in a deep-learning based model for recommendations by leveraging the technique of layer-wise relevance propagation. We use a Deep Convolutional Neural Network to extract relevant features from the input images before identifying similarity between the images in feature space. Relationships between the images are identified by the model and layer-wise relevance propagation is used to infer pixel-level details of the images that may have significantly informed the model's choice. We evaluate our method on an Amazon products dataset and demonstrate the efficacy of our approach.

* Homanga Bharadhwaj. 2018. Layer-wise Relevance Propagation for Explainable Recommendations. In Proceedings of SIGIR 2018 Workshop on ExplainAble Recommendation and Search (EARS'18). ACM, New York, NY, USA 
* Accepted in Proceedings of the EARS Workshop at SIGIR 2018 
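The epsilon-rule is a standard way to implement layer-wise relevance propagation through a single linear layer; the paper applies LRP to a deep CNN, so the tiny example below, with made-up activations, weights, and output relevances, only illustrates how relevance is redistributed to the inputs while its total is conserved:

```python
def lrp_linear(a, W, R_out, eps=1e-9):
    """Epsilon-rule LRP through a linear layer z_k = sum_j a[j] * W[j][k]:
    each input j receives the share of output relevance it contributed."""
    z = [sum(a[j] * W[j][k] for j in range(len(a))) for k in range(len(R_out))]
    stab = lambda v: v + (eps if v >= 0 else -eps)  # numerical stabilizer
    return [a[j] * sum(W[j][k] * R_out[k] / stab(z[k]) for k in range(len(R_out)))
            for j in range(len(a))]

activations = [1.0, 2.0]
W = [[1.0, 0.5], [0.5, 1.0]]
z = [2.0, 2.5]                          # forward pass through W
R_in = lrp_linear(activations, W, z)    # relevance pushed back to the inputs
print(R_in)  # sums to sum(z): relevance is conserved
```

Applied layer by layer down a CNN, the same rule yields the pixel-level relevance maps the paper uses as explanations.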


New Fairness Metrics for Recommendation that Embrace Differences

Dec 13, 2017
Sirui Yao, Bert Huang

We study fairness in collaborative-filtering recommender systems, which are sensitive to discrimination that exists in historical data. Biased data can lead collaborative filtering methods to make unfair predictions against minority groups of users. We identify the insufficiency of existing fairness metrics and propose four new metrics that address different forms of unfairness. These fairness metrics can be optimized by adding fairness terms to the learning objective. Experiments on synthetic and real data show that our new metrics can better measure fairness than the baseline, and that the fairness objectives effectively help reduce unfairness.

* Presented as a poster at the 2017 Workshop on Fairness, Accountability, and Transparency in Machine Learning (FAT/ML 2017) 
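In the spirit of the proposed metrics, two of them can be sketched as gaps between per-group prediction errors; the exact definitions below (names, averaging, group setup) are simplified assumptions, not necessarily the paper's formulas:

```python
def value_unfairness(pred_g, true_g, pred_o, true_o):
    """Mean over items of the gap between the two groups' *signed*
    prediction errors: penalizes over-rating one group while
    under-rating the other."""
    n = len(pred_g)
    return sum(abs((pred_g[j] - true_g[j]) - (pred_o[j] - true_o[j]))
               for j in range(n)) / n

def absolute_unfairness(pred_g, true_g, pred_o, true_o):
    """Mean over items of the gap between the two groups' *absolute*
    errors: penalizes unequal accuracy regardless of direction."""
    n = len(pred_g)
    return sum(abs(abs(pred_g[j] - true_g[j]) - abs(pred_o[j] - true_o[j]))
               for j in range(n)) / n

# One item, over-rated for one group and under-rated for the other:
# the signed errors diverge (value unfairness 2.0) even though the
# accuracy is identical (absolute unfairness 0.0).
print(value_unfairness([4.0], [3.0], [2.0], [3.0]),
      absolute_unfairness([4.0], [3.0], [2.0], [3.0]))  # → 2.0 0.0
```

Because both metrics are simple differentiable averages, they can be added as penalty terms to a collaborative-filtering objective, which is how the paper optimizes them.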


Formal Methods for the Informal Engineer: Workshop Recommendations

Apr 01, 2021
Gopal Sarma, James Koppel, Gregory Malecha, Patrick Schultz, Eric Drexler, Ramana Kumar, Cody Roux, Philip Zucker

Formal Methods for the Informal Engineer (FMIE) was a workshop held at the Broad Institute of MIT and Harvard in 2021 to explore the potential role of verified software in the biomedical software ecosystem. The motivation for organizing FMIE was the recognition that the life sciences and medicine are undergoing a transition from being passive consumers of software and AI/ML technologies to fundamental drivers of new platforms, including those which will need to be mission and safety-critical. Drawing on conversations leading up to and during the workshop, we make five concrete recommendations to help software leaders organically incorporate tools, techniques, and perspectives from formal methods into their project planning and development trajectories.

* 6 pages 


Tag Recommendation by Word-Level Tag Sequence Modeling

Nov 30, 2019
Xuewen Shi, Heyan Huang, Shuyang Zhao, Ping Jian, Yi-Kun Tang

In this paper, we transform tag recommendation into a word-based text generation problem and introduce a sequence-to-sequence model. The model combines the advantages of an LSTM-based encoder for sequential modeling with an attention-based decoder that uses local positional encodings to learn relations globally. Experimental results on Zhihu datasets show that the proposed model outperforms other state-of-the-art text-classification-based methods.

* This is a full length version of the paper in DASFAA 2019 


Negative Binomial Matrix Factorization for Recommender Systems

Jan 05, 2018
Olivier Gouvert, Thomas Oberlin, Cédric Févotte

We introduce negative binomial matrix factorization (NBMF), a matrix factorization technique specially designed for analyzing over-dispersed count data. It can be viewed as an extension of Poisson matrix factorization (PF) perturbed by a multiplicative term that models exposure. This term brings a degree of freedom for controlling the dispersion, making NBMF more robust to outliers. We show that NBMF makes it possible to skip traditional pre-processing stages, such as binarization, which lead to a loss of information. Two estimation approaches are presented: maximum likelihood and variational Bayes inference. We test our model on a recommendation task and show its ability to predict user tastes with better precision than PF.
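The robustness claim can be made concrete by comparing the negative binomial log-likelihood, parameterized by mean mu and dispersion alpha so that Var = mu + mu^2/alpha, against the Poisson log-likelihood on an over-dispersed count. This is a generic NB-versus-Poisson sketch, not the paper's factorization or inference code:

```python
import math

def nb_loglik(y, mu, alpha):
    """Negative binomial log-likelihood with mean mu and dispersion alpha
    (Var = mu + mu**2 / alpha): smaller alpha means more over-dispersion."""
    return (math.lgamma(y + alpha) - math.lgamma(alpha) - math.lgamma(y + 1)
            + alpha * math.log(alpha / (alpha + mu))
            + y * math.log(mu / (alpha + mu)))

def poisson_loglik(y, mu):
    # Poisson log-likelihood: the alpha -> infinity limit of the NB
    return y * math.log(mu) - mu - math.lgamma(y + 1)

# An outlier count (say, a user who played one song 50 times against a
# predicted mean of 2) is far less surprising under an over-dispersed NB
# model than under Poisson, so it distorts the fit much less.
print(round(poisson_loglik(50, 2.0), 1), round(nb_loglik(50, 2.0, 0.5), 1))
```

In NBMF the mean mu would come from the low-rank product of the user and item factors, with alpha controlling how much slack the model has for heavy-tailed play or purchase counts.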



Faithfully Explaining Rankings in a News Recommender System

May 14, 2018
Maartje ter Hoeve, Anne Schuth, Daan Odijk, Maarten de Rijke

There is an increasing demand for algorithms to explain their outcomes. So far, there is no method that explains the rankings produced by a ranking algorithm. To address this gap we propose LISTEN, a LISTwise ExplaiNer, to explain rankings produced by a ranking algorithm. To efficiently use LISTEN in production, we train a neural network to learn the underlying explanation space created by LISTEN; we call this model Q-LISTEN. We show that LISTEN produces faithful explanations and that Q-LISTEN is able to learn these explanations. Moreover, we show that LISTEN is safe to use in a real world environment: users of a news recommendation system do not behave significantly differently when they are exposed to explanations generated by LISTEN instead of manually generated explanations.

* 9 pages, 3 tables, 3 figures, 4 algorithms 
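LISTEN itself constructs explanations in a dedicated explanation space; as a generic stand-in, a listwise explanation can be sketched as feature ablation: score the list with and without each feature and attribute the ranking to the feature whose removal perturbs it most. Everything below, including the linear ranker, is an illustrative assumption, not the paper's algorithm:

```python
def rank(scores):
    # item indices ordered by descending score
    return sorted(range(len(scores)), key=lambda i: -scores[i])

def explain_ranking(docs, weights):
    """Attribute a linear ranker's output to the single feature whose
    removal (zeroing its weight) changes the ranking the most."""
    score = lambda d, ws: sum(w * x for w, x in zip(ws, d))
    base = rank([score(d, weights) for d in docs])
    best_feat, best_change = None, -1
    for f in range(len(weights)):
        ablated = [w if j != f else 0.0 for j, w in enumerate(weights)]
        new = rank([score(d, ablated) for d in docs])
        change = sum(a != b for a, b in zip(base, new))  # positions that moved
        if change > best_change:
            best_feat, best_change = f, change
    return best_feat

# Two documents, two features; feature 0 is what puts document 0 on top.
print(explain_ranking([[1.0, 0.0], [0.0, 1.0]], [2.0, 1.0]))  # → 0
```

Enumerating ablations per query is exactly the kind of cost that motivates distilling the explainer into a neural model, as the paper does with Q-LISTEN.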

