
"Recommendation": models, code, and papers

Tag Recommendation by Word-Level Tag Sequence Modeling

Nov 30, 2019
Xuewen Shi, Heyan Huang, Shuyang Zhao, Ping Jian, Yi-Kun Tang

In this paper, we transform tag recommendation into a word-based text generation problem and introduce a sequence-to-sequence model. The model inherits the advantages of an LSTM-based encoder for sequential modeling and an attention-based decoder with local positional encodings for learning relations globally. Experimental results on Zhihu datasets illustrate that the proposed model outperforms other state-of-the-art text-classification-based methods.

* This is a full-length version of the paper in DASFAA 2019
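For a rough feel of the decoder side, here is a minimal NumPy sketch of scaled dot-product attention over encoder states augmented with sinusoidal positional encodings. All names and shapes are hypothetical; the paper's *local* positional encodings would restrict attention to a window rather than use this standard global form.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sinusoidal_positions(n, d):
    # Standard global sinusoidal positional encodings; the paper's
    # "local" variant would be window-restricted instead.
    pos = np.arange(n)[:, None]
    i = np.arange(d)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

def attend(query, keys, values):
    # Scaled dot-product attention: one decoder step mixes the
    # encoder states by query-key similarity.
    d = query.shape[-1]
    scores = query @ keys.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ values

rng = np.random.default_rng(0)
enc_states = rng.normal(size=(6, 8)) + sinusoidal_positions(6, 8)
dec_query = rng.normal(size=(1, 8))
context = attend(dec_query, enc_states, enc_states)
```

The returned `context` is a convex combination of encoder states, one row per decoder query.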

Negative Binomial Matrix Factorization for Recommender Systems

Jan 05, 2018
Olivier Gouvert, Thomas Oberlin, Cédric Févotte

We introduce negative binomial matrix factorization (NBMF), a matrix factorization technique specially designed for analyzing over-dispersed count data. It can be viewed as an extension of Poisson matrix factorization (PF) perturbed by a multiplicative term that models exposure. This term brings a degree of freedom for controlling the dispersion, making NBMF more robust to outliers. We show that NBMF allows us to skip traditional pre-processing stages, such as binarization, which lead to a loss of information. Two estimation approaches are presented: maximum likelihood and variational Bayes inference. We test our model on a recommendation task and show its ability to predict user tastes with better precision than PF.
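For intuition about the Poisson baseline that NBMF extends, here is a minimal sketch of maximum-likelihood Poisson matrix factorization via the classic multiplicative KL-NMF updates. This is the base model only, not the NBMF estimator itself, which adds the dispersion-controlling exposure term.

```python
import numpy as np

def kl_nmf(V, k, iters=300, seed=0):
    # Multiplicative updates minimizing the KL divergence, i.e. the
    # maximum-likelihood estimate under a Poisson observation model.
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    eps = 1e-10
    for _ in range(iters):
        W *= ((V / (W @ H + eps)) @ H.T) / (H.sum(axis=1) + eps)
        H *= (W.T @ (V / (W @ H + eps))) / (W.sum(axis=0)[:, None] + eps)
    return W, H
```

The updates keep both factors nonnegative by construction, so no projection step is needed.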

Faithfully Explaining Rankings in a News Recommender System

May 14, 2018
Maartje ter Hoeve, Anne Schuth, Daan Odijk, Maarten de Rijke

There is an increasing demand for algorithms to explain their outcomes. So far, there is no method that explains the rankings produced by a ranking algorithm. To address this gap we propose LISTEN, a LISTwise ExplaiNer, to explain rankings produced by a ranking algorithm. To use LISTEN efficiently in production, we train a neural network to learn the underlying explanation space created by LISTEN; we call this model Q-LISTEN. We show that LISTEN produces faithful explanations and that Q-LISTEN is able to learn these explanations. Moreover, we show that LISTEN is safe to use in a real-world environment: users of a news recommendation system do not behave significantly differently when they are exposed to explanations generated by LISTEN instead of manually generated explanations.

* 9 pages, 3 tables, 3 figures, 4 algorithms 
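The abstract does not spell out LISTEN's mechanics, so the following toy sketch shows one generic, ablation-style way to attribute a ranking to input features for a linear scorer (not the paper's actual procedure; the ranker and data are hypothetical).

```python
import numpy as np

def rank(scores):
    # A ranking is the item order by descending score.
    return list(np.argsort(-scores))

def listwise_feature_impact(X, w):
    # Ablation-style listwise attribution for a linear ranker
    # (score = X @ w): zero out each feature in turn and count how
    # many list positions change. Illustrative only, not LISTEN.
    base = rank(X @ w)
    impact = {}
    for j in range(X.shape[1]):
        Xa = X.copy()
        Xa[:, j] = 0.0
        impact[j] = sum(b != a for b, a in zip(base, rank(Xa @ w)))
    return impact
```

A feature whose removal reorders many items gets a high impact score; a feature the ranking never depended on gets zero.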

Improved Representation Learning for Session-based Recommendation

Jul 04, 2021
Sai Mitheran, Abhinav Java, Surya Kant Sahu, Arshad Shaikh

Session-based recommendation systems suggest relevant items to users by modeling user behavior and preferences using short-term anonymous sessions. Existing methods leverage Graph Neural Networks (GNNs) that propagate and aggregate information from neighboring nodes, i.e., local message passing. Such graph-based architectures have representational limits, as a single sub-graph is prone to overfitting the sequential dependencies instead of accounting for complex transitions between items in different sessions. We propose using a Transformer in combination with a target-attentive GNN, which allows richer representation learning. Our experimental results and ablation studies show that our proposed method outperforms existing methods on real-world benchmark datasets.

* Submitted to AJCAI 2021 
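A minimal sketch of the target-attention idea in isolation: weight session items by their affinity to a candidate (target) item, pool them into a session vector, then score. Shapes and names are illustrative; the paper's model combines this with a GNN and Transformer layers.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def target_attentive_score(session_emb, target_emb):
    # Target attention: weight each session item embedding by its
    # affinity to the candidate item, pool, then score by dot product.
    weights = softmax(session_emb @ target_emb)
    session_repr = weights @ session_emb
    return float(session_repr @ target_emb)
```

A candidate aligned with the session's items attends strongly to them and receives a higher score than an unrelated candidate.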

Collaborative Filtering Ensemble for Personalized Name Recommendation

Jul 16, 2014
Bernat Coma-Puig, Ernesto Diaz-Aviles, Wolfgang Nejdl

Out of thousands of names to choose from, picking the right one for your child is a daunting task. In this work, our objective is to help parents make an informed decision while choosing a name for their baby. We follow a recommender system approach and combine, in an ensemble, the individual rankings produced by simple collaborative filtering algorithms in order to produce a personalized list of names that meets the individual parents' taste. Our experiments were conducted using real-world data collected from the query logs of 'nameling' (nameling.net), an online portal for searching and exploring names, which corresponds to the dataset released in the context of the ECML PKDD Discovery Challenge 2013. Our approach is intuitive, easy to implement, and features fast training and prediction steps.

* Proceedings of the ECML PKDD Discovery Challenge - Recommending Given Names. Co-located with ECML PKDD 2013. Prague, Czech Republic, September 27, 2013 
* Top-N recommendation; personalized ranking; given name recommendation 
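One simple way to combine individual rankings into an ensemble is a Borda count; the sketch below is illustrative and not necessarily the aggregation scheme the authors used.

```python
from collections import defaultdict

def borda_ensemble(rankings):
    # Borda count: an item earns (list length - position) points
    # from each ranked list; the ensemble sorts by total points.
    scores = defaultdict(float)
    for ranking in rankings:
        n = len(ranking)
        for pos, item in enumerate(ranking):
            scores[item] += n - pos
    return sorted(scores, key=scores.get, reverse=True)
```

Items ranked highly by several base rankers rise to the top of the blended list, which keeps both training and prediction fast, as the abstract emphasizes.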

A Hierarchical Bayesian Model for Size Recommendation in Fashion

Aug 02, 2019
Romain Guigourès, Yuen King Ho, Evgenii Koriagin, Abdul-Saboor Sheikh, Urs Bergmann, Reza Shirvany

We introduce a hierarchical Bayesian approach to tackle the challenging problem of size recommendation in e-commerce fashion. Our approach jointly models a size purchased by a customer and its possible return event: (1) no return, (2) returned because it was too small, or (3) returned because it was too big. These events are drawn from a multinomial distribution parameterized by the joint probability of each event, built following a hierarchy of combined priors. Such a model allows us to incorporate extended domain expertise and article characteristics as prior knowledge, which in turn makes it possible for the underlying parameters to emerge given sufficient data. Experiments are presented on real (anonymized) data from millions of customers, along with a detailed discussion of the efficiency of such an approach within a large-scale production system.

* In: Proceedings of the 12th ACM Conference on Recommender Systems. ACM, 2018, pp. 392-396
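A toy sketch of the leaf level of such a model: a Dirichlet-multinomial posterior over the three return events per size, with the prior standing in for domain expertise. The full hierarchy of priors is omitted, and all counts and sizes below are hypothetical.

```python
import numpy as np

def return_probabilities(counts, prior):
    # Posterior mean of a Dirichlet-multinomial over the outcomes
    # (no return, returned too small, returned too big).
    post = np.asarray(counts, float) + np.asarray(prior, float)
    return post / post.sum()

def recommend_size(stats_by_size, prior=(2.0, 1.0, 1.0)):
    # Recommend the size with the highest posterior probability of
    # "no return"; the prior encodes a mild bias toward keeping.
    return max(stats_by_size,
               key=lambda s: return_probabilities(stats_by_size[s], prior)[0])
```

With little data the prior dominates; as purchase and return counts accumulate, the posterior is driven by the observations, which mirrors the "parameters emerge given sufficient data" behavior described above.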

PIPE: Personalizing Recommendations via Partial Evaluation

Apr 26, 2000
Naren Ramakrishnan

It is shown that personalization of web content can be advantageously viewed as a form of partial evaluation --- a technique well known in the programming languages community. The basic idea is to model a recommendation space as a program, then partially evaluate this program with respect to user preferences (and features) to obtain specialized content. This technique supports both content-based and collaborative approaches, and is applicable to a range of applications that require automatic information integration from multiple web sources. The effectiveness of this methodology is illustrated by two example applications --- (i) personalizing content for visitors to the Blacksburg Electronic Village (http://www.bev.net), and (ii) locating and selecting scientific software on the Internet. The scalability of this technique is demonstrated by its ability to interface with online web ontologies that index thousands of web pages.
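The partial-evaluation view can be sketched in a few lines of Python with `functools.partial`: specialize a "recommendation program" with respect to a user's preferences, leaving only the query to be supplied later. The program and its data below are hypothetical stand-ins for a real recommendation space.

```python
from functools import partial

def recommend(content_db, user_prefs, query):
    # The "recommendation space as a program": filter content by a
    # user's preferred topics, then by the query string.
    liked = [c for c in content_db if c["topic"] in user_prefs]
    return [c["title"] for c in liked if query.lower() in c["title"].lower()]

content_db = [
    {"title": "Village Events Calendar", "topic": "community"},
    {"title": "Linear Algebra Solvers", "topic": "software"},
    {"title": "Sparse Matrix Software Guide", "topic": "software"},
]

# Partial evaluation: specialize the program with respect to one
# user's preferences, leaving only the query to supply later.
for_alice = partial(recommend, content_db, {"software"})
```

The specialized `for_alice` is the personalized artifact: the preference-dependent filtering has been "evaluated away", and only query-dependent work remains.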

Cleaned Similarity for Better Memory-Based Recommenders

May 17, 2019
Farhan Khawar, Nevin L. Zhang

Memory-based collaborative filtering methods like user or item k-nearest neighbors (kNN) are a simple yet effective solution to the recommendation problem. The backbone of these methods is the estimation of the empirical similarity between users/items. In this paper, we analyze the spectral properties of the Pearson and the cosine similarity estimators, and we use tools from random matrix theory to argue that they suffer from noise and eigenvalue spreading. We argue that, unlike the Pearson correlation, the cosine similarity naturally possesses the desirable property of eigenvalue shrinkage for large eigenvalues. However, due to its zero-mean assumption, it overestimates the largest eigenvalues. We quantify this overestimation and present a simple re-scaling and noise cleaning scheme. This results in better performance of the memory-based methods compared to their vanilla counterparts.

* To appear in SIGIR 2019 
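A generic random-matrix-theory cleaning scheme in this spirit can be sketched as: flatten eigenvalues below the Marchenko-Pastur bulk edge, which are statistically indistinguishable from noise, and rebuild the similarity matrix. This is a standard RMT recipe for illustration, not the paper's exact re-scaling scheme.

```python
import numpy as np

def cosine_similarity(X):
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    return Xn @ Xn.T

def clean_similarity(S, n_samples):
    # Flatten eigenvalues below the Marchenko-Pastur bulk edge
    # (indistinguishable from noise) to their average, preserving
    # the trace, then rebuild the similarity matrix.
    n = S.shape[0]
    q = n / n_samples          # assumes n <= n_samples
    bulk_edge = (1.0 + np.sqrt(q)) ** 2
    vals, vecs = np.linalg.eigh(S)
    noise = vals < bulk_edge
    if noise.any():
        vals[noise] = vals[noise].mean()
    return (vecs * vals) @ vecs.T
```

Flattening the bulk to its mean keeps the trace unchanged while removing the spurious structure that noisy eigenvalues would otherwise inject into kNN neighborhoods.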

Meta Decision Trees for Explainable Recommendation Systems

Dec 19, 2019
Eyal Shulman, Lior Wolf

We tackle the problem of building explainable recommendation systems that are based on a per-user decision tree, with decision rules that are based on single attribute values. We build the trees by applying learned regression functions to obtain the decision rules as well as the values at the leaf nodes. The regression functions receive as input the embedding of the user's training set, as well as the embedding of the samples that arrive at the current node. The embedding and the regressors are learned end-to-end with a loss that encourages the decision rules to be sparse. By applying our method, we obtain a collaborative filtering solution that provides a direct explanation for every rating it provides. With regard to accuracy, it is competitive with other algorithms. However, as expected, explainability comes at a cost, and the accuracy is typically slightly lower than the state-of-the-art results reported in the literature.
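As a toy stand-in for the learned per-user trees, here is a depth-1 regression tree (a stump) whose single attribute-value rule is itself the explanation. The paper instead learns the rules with regression networks over embeddings; this sketch only illustrates why such rules are directly interpretable.

```python
import numpy as np

def fit_stump(attr, ratings):
    # Depth-1 decision tree on one attribute: pick the threshold
    # minimizing squared error; leaves predict mean ratings. The
    # rule "attr <= t" is itself the explanation for a prediction.
    attr = np.asarray(attr, float)
    ratings = np.asarray(ratings, float)
    best = None
    for t in np.unique(attr)[:-1]:
        left, right = ratings[attr <= t], ratings[attr > t]
        err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, t, left.mean(), right.mean())
    _, t, lo, hi = best
    return lambda x: lo if x <= t else hi
```

Every prediction traces back to one threshold test on one attribute, which is the kind of direct explanation the per-user trees provide.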

Revisiting the Performance of iALS on Item Recommendation Benchmarks

Oct 26, 2021
Steffen Rendle, Walid Krichene, Li Zhang, Yehuda Koren

Matrix factorization learned by implicit alternating least squares (iALS) is a popular baseline in recommender system research publications. iALS is known to be one of the most computationally efficient and scalable collaborative filtering methods. However, recent studies suggest that its prediction quality is not competitive with the current state of the art, in particular autoencoders and other item-based collaborative filtering methods. In this work, we revisit the iALS algorithm and present a bag of tricks that we found useful when applying iALS. We revisit four well-studied benchmarks where iALS was reported to perform poorly and show that with proper tuning, iALS is highly competitive and outperforms any method on at least half of the comparisons. We hope that these high-quality results together with iALS's known scalability spark new interest in applying and further improving this decade-old technique.
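A compact, dense sketch of the classic iALS updates (closed-form alternating least squares with confidence weights, in the Hu/Koren/Volinsky style). This is a minimal illustration, not the tuned variant from the paper, and a production version would exploit sparsity.

```python
import numpy as np

def ials(R, k=2, reg=0.1, alpha=10.0, iters=15, seed=0):
    # Dense iALS: binary preferences P = (R > 0), confidences
    # C = 1 + alpha * R, and exact alternating least-squares solves
    # for the user factors X and item factors Y.
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    X = 0.1 * rng.standard_normal((n_users, k))
    Y = 0.1 * rng.standard_normal((n_items, k))
    P = (R > 0).astype(float)
    C = 1.0 + alpha * R
    reg_eye = reg * np.eye(k)
    for _ in range(iters):
        for u in range(n_users):
            A = Y.T @ (C[u][:, None] * Y) + reg_eye
            X[u] = np.linalg.solve(A, Y.T @ (C[u] * P[u]))
        for i in range(n_items):
            A = X.T @ (C[:, i][:, None] * X) + reg_eye
            Y[i] = np.linalg.solve(A, X.T @ (C[:, i] * P[:, i]))
    return X, Y
```

Each factor update is an exact ridge-regression solve, which is what makes iALS both stable and easy to scale; the paper's "bag of tricks" concerns tuning choices such as the regularization and confidence weighting.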
