
"Recommendation": models, code, and papers

Memory Augmented Multi-Instance Contrastive Predictive Coding for Sequential Recommendation

Sep 13, 2021
Ruihong Qiu, Zi Huang, Hongzhi Yin

Sequential recommendation aims to recommend items, such as products, songs and places, to users based on the sequential patterns of their historical records. Most existing sequential recommender models use the next-item prediction task as the training signal. Unfortunately, these methods face two essential challenges: (1) the long-term preference is difficult to capture, and (2) the supervision signal is too sparse to effectively train a model. In this paper, we propose a novel sequential recommendation framework to overcome these challenges based on a memory augmented multi-instance contrastive predictive coding scheme, denoted as MMInfoRec. The basic contrastive predictive coding (CPC) module serves as the encoder of sequences and items. A memory module is designed to augment the auto-regressive prediction in CPC, enabling a flexible and general representation of the encoded preference and improving the ability to capture long-term preference. For effective training of MMInfoRec, a novel multi-instance noise contrastive estimation (MINCE) loss is proposed that uses multiple positive samples, offering effective exploitation of samples inside a mini-batch. Although the proposed MMInfoRec framework follows the contrastive learning style, a further fine-tuning step is not required because its contrastive training task is well aligned with the target recommendation task. Extensive experiments on four benchmark datasets show that MMInfoRec outperforms state-of-the-art baselines.
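
To make the training objective concrete, below is a minimal sketch of an in-batch contrastive loss with multiple positive samples per sequence, in the spirit of the MINCE loss described above. The tensor shapes, the temperature, and the averaging over positives are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a multi-positive, in-batch contrastive loss (MINCE-style).
# Shapes, temperature, and the positive-averaging rule are assumptions.
import torch
import torch.nn.functional as F

def multi_positive_nce(anchor, positives, temperature=0.1):
    """anchor: (B, d) sequence representations.
    positives: (B, P, d) P positive item embeddings per sequence.
    Every other item in the mini-batch serves as a negative."""
    B, P, d = positives.shape
    anchor = F.normalize(anchor, dim=-1)                        # (B, d)
    items = F.normalize(positives.reshape(B * P, d), dim=-1)    # (B*P, d)
    logits = anchor @ items.t() / temperature                   # (B, B*P)
    # Anchor i is positive only with its own P items.
    pos_mask = torch.zeros(B, B * P, dtype=torch.bool, device=anchor.device)
    for i in range(B):
        pos_mask[i, i * P:(i + 1) * P] = True
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Average the log-likelihood over the P positives of each anchor.
    return -(log_prob[pos_mask].reshape(B, P)).mean()
```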


  Access Paper or Ask Questions

Graph Convolutional Neural Networks for Web-Scale Recommender Systems

Jun 06, 2018
Rex Ying, Ruining He, Kaifeng Chen, Pong Eksombatchai, William L. Hamilton, Jure Leskovec

Recent advancements in deep neural networks for graph-structured data have led to state-of-the-art performance on recommender system benchmarks. However, making these methods practical and scalable to web-scale recommendation tasks with billions of items and hundreds of millions of users remains a challenge. Here we describe a large-scale deep recommendation engine that we developed and deployed at Pinterest. We develop a data-efficient Graph Convolutional Network (GCN) algorithm PinSage, which combines efficient random walks and graph convolutions to generate embeddings of nodes (i.e., items) that incorporate both graph structure and node feature information. Compared to prior GCN approaches, we develop a novel method based on highly efficient random walks to structure the convolutions, and we design a novel training strategy that relies on harder-and-harder training examples to improve the robustness and convergence of the model. We also develop an efficient MapReduce model inference algorithm to generate embeddings using a trained model. We deploy PinSage at Pinterest and train it on 7.5 billion examples on a graph with 3 billion nodes representing pins and boards, and 18 billion edges. According to offline metrics, user studies and A/B tests, PinSage generates higher-quality recommendations than comparable deep learning and graph-based alternatives. To our knowledge, this is the largest application of deep graph embeddings to date and paves the way for a new generation of web-scale recommender systems based on graph convolutional architectures.
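
The core idea of structuring convolutions with random walks can be illustrated with a short sketch: simulate short walks from a node, keep the most frequently visited neighbors, and aggregate their features with visit-count weights. The graph representation, walk parameters, and weighted-mean pooling below are assumptions for illustration, not the production PinSage code.

```python
# Sketch of random-walk-based neighborhood selection with importance pooling.
# Graph layout, walk lengths, and the pooling rule are illustrative assumptions.
import random
from collections import Counter

import numpy as np

def importance_neighborhood(adj, node, num_walks=50, walk_len=3, top_t=10):
    """adj: dict node -> list of neighbor ids. Returns (neighbors, weights)."""
    counts = Counter()
    for _ in range(num_walks):
        cur = node
        for _ in range(walk_len):
            if not adj.get(cur):
                break
            cur = random.choice(adj[cur])
            counts[cur] += 1
    counts.pop(node, None)            # do not treat the node as its own neighbor
    top = counts.most_common(top_t)
    if not top:
        return [], np.zeros(0, dtype=np.float32)
    neighbors, visits = zip(*top)
    weights = np.asarray(visits, dtype=np.float32)
    return list(neighbors), weights / weights.sum()

def convolve(features, adj, node, **walk_kwargs):
    """One importance-pooling step: concatenate the node's own feature vector
    with the visit-count-weighted mean of its selected neighbors' features."""
    nbrs, w = importance_neighborhood(adj, node, **walk_kwargs)
    if not nbrs:
        return np.concatenate([features[node], np.zeros_like(features[node])])
    pooled = (w[:, None] * np.stack([features[n] for n in nbrs])).sum(axis=0)
    return np.concatenate([features[node], pooled])
```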

* KDD 2018 

  Access Paper or Ask Questions

A Long-Short Demands-Aware Model for Next-Item Recommendation

Feb 12, 2019
Ting Bai, Pan Du, Wayne Xin Zhao, Ji-Rong Wen, Jian-Yun Nie

Recommending the right products is the central problem in recommender systems, but the right products should also be recommended at the right time to meet users' demands, so as to maximize their value. Users' demands, which imply strong purchase intent, can be the most useful signal for promoting product sales if well utilized. Previous recommendation models mainly focused on users' general interests to find the right products, whereas the aspect of meeting users' demands at the right time has been much less explored. To address this problem, we propose a novel Long-Short Demands-aware Model (LSDM), in which both users' interests in items and users' demands over time are incorporated. We distinguish two aspects of demands: long-time demands (e.g., purchasing the same product repeatedly, showing a long-term persistent interest) and short-time demands (e.g., co-purchases, such as buying paintbrushes after pigments). To exploit such long-short demands, we create clusters that group successive product purchases together according to different time spans, and use recurrent neural networks to model the sequence of clusters at each time scale. The long-short purchase demands at multiple time scales are finally aggregated by joint learning strategies. Experimental results on three real-world commerce datasets demonstrate the effectiveness of our model for next-item recommendation, showing the usefulness of modeling users' long-short purchase demands at multiple time scales.
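
The time-span clustering step can be sketched as follows: successive purchases are grouped into a new cluster whenever the gap between them exceeds the span of the current time scale, and each scale produces its own cluster sequence for a per-scale RNN. The concrete span values are illustrative assumptions.

```python
# Sketch of grouping successive purchases into clusters by time span,
# one cluster sequence per time scale. Span values are illustrative.
def cluster_by_time_span(purchases, span_seconds):
    """purchases: list of (timestamp, item_id), sorted by timestamp.
    Returns a list of clusters, each a list of item ids."""
    clusters, current, last_ts = [], [], None
    for ts, item in purchases:
        if last_ts is not None and ts - last_ts > span_seconds:
            clusters.append(current)
            current = []
        current.append(item)
        last_ts = ts
    if current:
        clusters.append(current)
    return clusters

# Example time scales: daily, weekly, and monthly spans (in seconds).
SCALES = {"day": 86400, "week": 7 * 86400, "month": 30 * 86400}

def multi_scale_cluster_sequences(purchases):
    """One cluster sequence per time scale, each to be fed to its own RNN."""
    return {name: cluster_by_time_span(purchases, span) for name, span in SCALES.items()}
```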


  Access Paper or Ask Questions

Understanding Capacity-Driven Scale-Out Neural Recommendation Inference

Nov 11, 2020
Michael Lui, Yavuz Yetim, Özgür Özkan, Zhuoran Zhao, Shin-Yeh Tsai, Carole-Jean Wu, Mark Hempstead

Deep learning recommendation models have grown to the terabyte scale. Traditional serving schemes, which load an entire model onto a single server, are unable to support this scale. One approach is distributed serving, or distributed inference, which divides the memory requirements of a single large model across multiple servers. This work is a first step for the systems research community toward developing novel model-serving solutions, given the huge system design space. Large-scale deep recommender systems are a novel workload and vital to study, as they consume up to 79% of all inference cycles in the data center. To that end, this work describes and characterizes scale-out deep learning recommendation inference using data-center serving infrastructure. It specifically explores latency-bounded inference systems, in contrast to the throughput-oriented training systems of other recent work. We find that the latency and compute overheads of distributed inference are largely a result of a model's static embedding table distribution and the sparsity of input inference requests. We further evaluate three embedding table mapping strategies on three DLRM-like models and identify challenging design trade-offs in terms of end-to-end latency, compute overhead, and resource efficiency. Overall, we observe only a marginal latency overhead when data-center-scale recommendation models are served with distributed inference: P99 latency increases by only 1% in the best-case configuration. The latency overheads are largely a result of the commodity infrastructure used and the sparsity of the embedding tables. Even more encouragingly, we also show how distributed inference can enable efficiency improvements in data-center-scale recommendation serving.
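
One simple capacity-driven way to map embedding tables to inference servers is a greedy assignment that places the largest tables first on the least-loaded server, subject to a per-server memory budget. The heuristic and the capacity figures below are assumptions for illustration, not one of the three strategies evaluated in the paper.

```python
# Sketch of a greedy, capacity-driven embedding-table-to-server mapping.
# The heuristic and capacities are illustrative assumptions.
def map_tables_to_servers(table_sizes_gb, num_servers, capacity_gb):
    """table_sizes_gb: dict table_name -> size in GB.
    Returns dict server_id -> list of table names assigned to it."""
    loads = [0.0] * num_servers
    placement = {s: [] for s in range(num_servers)}
    # Place the largest tables first, each on the currently least-loaded server.
    for name, size in sorted(table_sizes_gb.items(), key=lambda kv: -kv[1]):
        target = min(range(num_servers), key=lambda s: loads[s])
        if loads[target] + size > capacity_gb:
            raise ValueError(f"table {name} ({size} GB) does not fit on any server")
        placement[target].append(name)
        loads[target] += size
    return placement

# Example: three 100 GB servers hosting a mix of table sizes.
print(map_tables_to_servers({"t0": 80, "t1": 60, "t2": 40, "t3": 30}, 3, 100))
```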

* 16 pages + references, 16 Figures. Additive revision to clarify distinction between this work and other DLRM-like models and add Acknowledgments 

  Access Paper or Ask Questions

Learning Dual Dynamic Representations on Time-Sliced User-Item Interaction Graphs for Sequential Recommendation

Sep 24, 2021
Zeyuan Chen, Wei Zhang, Junchi Yan, Gang Wang, Jianyong Wang

Sequential recommendation aims to recommend items that a target user will interact with in the near future based on the items they have interacted with historically. While modeling temporal dynamics is crucial for sequential recommendation, most existing studies concentrate solely on the user side and overlook the sequential patterns on the counterpart, i.e., the item side. Although a few studies investigate the dynamics of both sides, the complex user-item interactions are not fully exploited from a global perspective to derive dynamic user and item representations. In this paper, we devise a novel Dynamic Representation Learning model for Sequential Recommendation (DRL-SRe). To better model the user-item interactions and characterize the dynamics of both sides, the proposed model builds a global user-item interaction graph for each time slice and exploits time-sliced graph neural networks to learn user and item representations. Moreover, to enable the model to capture fine-grained temporal information, we propose an auxiliary temporal prediction task over consecutive time slices based on a temporal point process. Comprehensive experiments on three public real-world datasets demonstrate that DRL-SRe outperforms state-of-the-art sequential recommendation models by a large margin.
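
The time-slicing step can be sketched by bucketing interactions into fixed windows, so that each window yields a global user-item edge list for a per-slice graph neural network. The window granularity is an illustrative assumption.

```python
# Sketch of building one global user-item interaction graph per time slice.
# The slice width is an illustrative assumption.
from collections import defaultdict

def build_time_sliced_graphs(interactions, slice_seconds=7 * 86400):
    """interactions: iterable of (user_id, item_id, timestamp).
    Returns dict slice_index -> list of (user_id, item_id) edges."""
    graphs = defaultdict(list)
    for user, item, ts in interactions:
        graphs[int(ts // slice_seconds)].append((user, item))
    return dict(graphs)

# Each per-slice edge list would then be fed to a graph neural network,
# with an auxiliary temporal prediction task linking consecutive slices.
```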

* 11 pages, accepted by CIKM'21 

  Access Paper or Ask Questions

D-HAN: Dynamic News Recommendation with Hierarchical Attention Network

Dec 19, 2021
Qinghua Zhao, Xu Chen, Hui Zhang, Shuai Ma

News recommendation is an effective information dissemination solution in modern society. While recent years have witnessed many promising news recommendation models, they mostly capture user-news interactions at the document level in a static manner. However, in real-world scenarios news can be quite complex and diverse, and blindly squeezing all of its content into one embedding vector can be less effective at extracting information compatible with users' personalized preferences. In addition, user preferences in the news recommendation scenario can be highly dynamic, and a tailored dynamic mechanism should be designed for better recommendation performance. In this paper, we propose a novel dynamic news recommender model. To better understand the news content, we leverage the attention mechanism to represent the news at the sentence, element, and document levels, respectively. To capture users' dynamic preferences, continuous time information is seamlessly incorporated into the computation of the attention weights. More specifically, we design a hierarchical attention network in which the lower layer learns the importance of different sentences and elements, and the upper layer captures the correlations between previously interacted news and the target news. To comprehensively model the dynamic characteristics, we first enhance the traditional attention mechanism by incorporating both absolute and relative time information, and we then propose a dynamic negative sampling method to better learn from users' implicit feedback. We conduct extensive experiments on three real-world datasets to demonstrate our model's effectiveness. Our source code and pre-trained representations are available at https://github.com/lshowway/D-HAN.
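
One way to fold relative time into attention, loosely in the spirit of the time-aware mechanism described above, is to add a learned bias indexed by the bucketed time gap to the usual dot-product scores before the softmax. The bucketing scheme and the additive bias are assumptions for illustration, not the D-HAN implementation.

```python
# Sketch of time-aware attention: dot-product scores plus a learned bias
# per time-gap bucket. Bucketing and the additive bias are assumptions.
import torch
import torch.nn as nn

class TimeAwareAttention(nn.Module):
    def __init__(self, dim, num_time_buckets=64):
        super().__init__()
        self.scale = dim ** -0.5
        self.time_bias = nn.Embedding(num_time_buckets, 1)  # one bias per time-gap bucket

    def forward(self, query, keys, values, time_gap_buckets):
        """query: (B, d); keys/values: (B, L, d); time_gap_buckets: (B, L) long tensor."""
        scores = (keys @ query.unsqueeze(-1)).squeeze(-1) * self.scale  # (B, L)
        scores = scores + self.time_bias(time_gap_buckets).squeeze(-1)  # add temporal bias
        attn = torch.softmax(scores, dim=-1)
        return (attn.unsqueeze(-1) * values).sum(dim=1)                 # (B, d)
```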


  Access Paper or Ask Questions

Modeling Personalized Item Frequency Information for Next-basket Recommendation

May 31, 2020
Haoji Hu, Xiangnan He, Jinyang Gao, Zhi-Li Zhang

Next-basket recommendation (NBR) is prevalent in e-commerce and the retail industry. In this scenario, a user purchases a set of items (a basket) at a time. NBR performs sequential modeling and recommendation based on a sequence of baskets. NBR is in general more complex than the widely studied sequential (session-based) recommendation, which recommends the next item based on a sequence of items. Recurrent neural networks (RNNs) have proved very effective for sequential modeling and have thus been adapted for NBR. However, we argue that existing RNNs cannot directly capture item frequency information in the recommendation scenario. Through careful analysis of real-world datasets, we find that personalized item frequency (PIF) information, which records the number of times each item is purchased by a user, provides two critical signals for NBR, yet it has been largely ignored by existing methods. Even though existing methods such as RNN-based methods have strong representation ability, our empirical results show that they fail to learn and capture PIF, and as a result they cannot fully exploit the critical signals it contains. Given this inherent limitation of RNNs, we propose a simple item-frequency-based k-nearest-neighbors (kNN) method to directly utilize these critical signals. We evaluate our method on four public real-world datasets. Despite its relative simplicity, our method frequently outperforms state-of-the-art NBR methods, including deep learning methods using RNNs, when patterns associated with PIF play an important role in the data.
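
A minimal PIF-based kNN recommender can be sketched as follows: represent each user by a vector of per-item purchase counts, retrieve the most similar users by cosine similarity, and score items by mixing the user's own frequencies with the neighbors' average. The mixing weight and neighborhood size are illustrative assumptions, not the paper's exact scoring rule.

```python
# Sketch of a personalized-item-frequency (PIF) kNN scorer.
# The mixing weight alpha and neighborhood size k are illustrative assumptions.
import numpy as np

def pif_knn_scores(pif_matrix, user_idx, k=100, alpha=0.7):
    """pif_matrix: (num_users, num_items) matrix of per-user purchase counts.
    Returns a score per item for the target user; higher = more likely in the next basket."""
    target = pif_matrix[user_idx]
    norms = np.linalg.norm(pif_matrix, axis=1) * (np.linalg.norm(target) + 1e-12)
    sims = pif_matrix @ target / (norms + 1e-12)     # cosine similarity to every user
    sims[user_idx] = -np.inf                         # exclude the user themself
    neighbors = np.argsort(sims)[-k:]                # indices of the k most similar users
    neighbor_scores = pif_matrix[neighbors].mean(axis=0)
    return alpha * target + (1 - alpha) * neighbor_scores
```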


  Access Paper or Ask Questions

Private and Utility Enhanced Recommendations with Local Differential Privacy and Gaussian Mixture Model

Mar 06, 2021
Jeyamohan Neera, Xiaomin Chen, Nauman Aslam, Kezhi Wang, Zhan Shu

Recommendation systems rely heavily on users' behavioural and preference data (e.g., ratings, likes) to produce accurate recommendations. However, users have privacy concerns due to unethical data aggregation and analytical practices carried out by the Service Provider (SP). Local differential privacy (LDP) based perturbation mechanisms add noise to users' data on the user side before sending it to the SP, which then uses the perturbed data to perform recommendations. Although LDP protects users' privacy from the SP, it causes a substantial decline in predictive accuracy. To address this issue, we propose an LDP-based Matrix Factorization (MF) approach with a Gaussian Mixture Model (MoG). The LDP perturbation mechanism, Bounded Laplace (BLP), regulates the effect of noise by confining the perturbed ratings to a predetermined domain. We derive a sufficient condition on the scale parameter for BLP to satisfy $\epsilon$-LDP. At the SP, the MoG model estimates the noise added to the perturbed ratings and the MF algorithm predicts the missing ratings. Our proposed LDP-based recommendation system improves recommendation accuracy without violating LDP principles. Empirical evaluations on three real-world datasets, i.e., Movielens, Libimseti and Jester, demonstrate that our method offers a substantial increase in predictive accuracy under a strong privacy guarantee.
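
The bounded perturbation step can be sketched with rejection sampling: Laplace noise is added to a rating and the draw is repeated until the result falls inside the rating domain, so the SP only ever receives in-range values. The rating bounds and scale value below are placeholders; the paper derives the condition the scale must satisfy for $\epsilon$-LDP.

```python
# Sketch of bounded Laplace perturbation via rejection sampling.
# Bounds and scale are placeholders; the paper derives the epsilon-LDP condition.
import numpy as np

def bounded_laplace_perturb(rating, scale, lower=1.0, upper=5.0, rng=None):
    """Add Laplace noise and resample until the perturbed rating stays in [lower, upper]."""
    rng = rng or np.random.default_rng()
    while True:
        noisy = rating + rng.laplace(loc=0.0, scale=scale)
        if lower <= noisy <= upper:
            return noisy

# Example: perturb a 1-to-5 star rating before sending it to the service provider.
print(bounded_laplace_perturb(4.0, scale=0.8))
```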

* 12 pages 

  Access Paper or Ask Questions
