"Recommendation": models, code, and papers

Neural Collaborative Reasoning

Jun 11, 2020
Hanxiong Chen, Shaoyun Shi, Yunqi Li, Yongfeng Zhang

Collaborative Filtering (CF) has been an important approach to recommender systems. However, existing CF methods are mostly designed around the idea of matching: by learning user and item embeddings from data with shallow or deep models, they try to capture the relevance patterns in the data, so that a user embedding can be matched with appropriate item embeddings using designed or learned similarity functions. We argue that, as a cognitive rather than a perceptual task, recommendation requires not only the ability to recognize and match patterns in the data, but also the ability to reason logically over the data. Inspired by recent progress on neural-symbolic machine learning, we propose a neural collaborative reasoning framework that integrates the power of embedding learning and logical reasoning, where the embeddings capture similarity patterns in the data from a perceptual perspective, and the logic facilitates cognitive reasoning for informed decision making. An important challenge, however, is to bridge differentiable neural networks and symbolic reasoning in a shared architecture for optimization and inference. To solve this problem, we propose a Modularized Logical Neural Network architecture, which learns basic logical operations such as AND, OR, and NOT as neural modules based on logical regularizers, and learns logic variables as vector embeddings. In this way, each logic expression can be equivalently organized as a neural network, so that logical reasoning and prediction can be conducted in a continuous space. Experiments on several real-world datasets verify the advantages of our framework compared with both traditional shallow and deep models.
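
For a concrete picture of the idea, here is a minimal PyTorch sketch of logical operators learned as neural modules over embeddings, with a logical regularizer. Module names, dimensions, the example expression, and the regularizer terms are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

DIM = 64  # embedding size (illustrative)

class LogicModule(nn.Module):
    """A small MLP acting as one logical operator on embeddings."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, DIM), nn.ReLU(), nn.Linear(DIM, out_dim))
    def forward(self, x):
        return self.net(x)

class NeuralLogic(nn.Module):
    def __init__(self, dim=DIM):
        super().__init__()
        self.NOT = LogicModule(dim, dim)        # NOT(a)
        self.AND = LogicModule(2 * dim, dim)    # AND(a, b)
        self.OR  = LogicModule(2 * dim, dim)    # OR(a, b)
        self.true = nn.Parameter(torch.randn(dim))  # anchor vector for logical TRUE

    def forward_expression(self, a, b, c):
        # Example expression (a AND b) OR (NOT c); closeness to TRUE means "recommend".
        return self.OR(torch.cat([self.AND(torch.cat([a, b], -1)), self.NOT(c)], -1))

    def logical_regularizer(self, x):
        # Soft logical constraints: NOT(NOT(x)) should return x; x OR TRUE should be TRUE.
        t = self.true.expand_as(x)
        r1 = F.mse_loss(self.NOT(self.NOT(x)), x)
        r2 = 1 - F.cosine_similarity(self.OR(torch.cat([x, t], -1)), t, dim=-1).mean()
        return r1 + r2

# Usage with random stand-ins for learned event embeddings (in practice these
# would come from user-item interaction histories).
model = NeuralLogic()
a, b, c = (torch.randn(8, DIM) for _ in range(3))
expr = model.forward_expression(a, b, c)
score = F.cosine_similarity(expr, model.true.expand_as(expr), dim=-1)
loss = -score.mean() + 0.1 * model.logical_regularizer(a)
loss.backward()
```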

* 10 pages, 5 figures 


Understanding Public Opinion on Using Hydroxychloroquine for COVID-19 Treatment via Social Media

Jan 01, 2022
Thuy T. Do, Du Nguyen, Anh Le, Anh Nguyen, Dong Nguyen, Nga Hoang, Uyen Le, Tuan Tran

Hydroxychloroquine (HCQ) is used to prevent or treat malaria caused by mosquito bites. Recently, the drug has been suggested as a treatment for COVID-19, but this has not been supported by scientific evidence. Information regarding the drug's efficacy has flooded social networks, posing potential threats to the community by distorting perceptions of its efficacy. This paper studies the reactions of social network users to the recommendation of using HCQ for COVID-19 treatment by analyzing the reaction patterns and sentiment of tweets. We collected 164,016 tweets from February to December 2020 and used a text mining approach to identify social reaction patterns and opinion change over time. Our descriptive analysis identified irregularities in users' reaction patterns that were tightly associated with social and news feeds on the development of HCQ as a COVID-19 treatment. The study linked tweets and Google search frequencies to reveal the viewpoints of local communities on the use of HCQ for COVID-19 treatment across different states. Further, our tweet sentiment analysis reveals that public opinion changed significantly over time regarding the recommendation of using HCQ for COVID-19 treatment: the data showed high support in the early months, but it declined significantly in October. Finally, using the manual classification of 4,850 tweets by humans as our benchmark, our sentiment analysis showed that the Google Cloud Natural Language algorithm outperformed the Valence Aware Dictionary and sEntiment Reasoner (VADER) in classifying tweets, especially in the sarcastic tweet group.
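
As a hedged illustration of the lexicon-based baseline compared here, the snippet below applies the VADER analyzer from the open-source vaderSentiment package to label tweets; the thresholds and example tweets are purely illustrative and do not reproduce the paper's pipeline:

```python
# pip install vaderSentiment
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def label_tweet(text, pos_threshold=0.05, neg_threshold=-0.05):
    """Map VADER's compound score to a coarse sentiment label."""
    compound = analyzer.polarity_scores(text)["compound"]
    if compound >= pos_threshold:
        return "positive"
    if compound <= neg_threshold:
        return "negative"
    return "neutral"

tweets = [
    "Hydroxychloroquine saved my uncle, amazing drug!",        # illustrative example
    "No evidence HCQ works for COVID-19, stop sharing this.",  # illustrative example
]
for t in tweets:
    print(label_tweet(t), "-", t)
```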

* This paper will be presented at HEALTHINF 2022 


Latent Relational Metric Learning via Memory-based Attention for Collaborative Ranking

Feb 13, 2018
Yi Tay, Anh Tuan Luu, Siu Cheung Hui

This paper proposes a new neural architecture for collaborative ranking with implicit feedback. Our model, LRML (Latent Relational Metric Learning), is a novel metric learning approach for recommendation. More specifically, instead of simple push-pull mechanisms between user and item pairs, we propose to learn latent relations that describe each user-item interaction. This helps alleviate the potential geometric inflexibility of existing metric learning approaches, enabling not only better performance but also greater modeling capability, which allows our model to scale to a larger number of interactions. To do so, we employ an augmented memory module and learn to attend over its memory blocks to construct latent relations. The memory-based attention module is controlled by the user-item interaction, making the learned relation vector specific to each user-item pair. Hence, this can be interpreted as learning an exclusive and optimal relational translation for each user-item interaction. The proposed architecture demonstrates state-of-the-art performance across multiple recommendation benchmarks. LRML outperforms other metric learning models by 6%-7.5% in terms of Hits@10 and nDCG@10 on large datasets such as Netflix and MovieLens20M. Moreover, qualitative studies demonstrate that our proposed model is able to infer and encode explicit sentiment, temporal, and attribute information despite being trained only on implicit feedback. This confirms the ability of LRML to uncover hidden relational structure within implicit datasets.
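
A rough PyTorch sketch of the core mechanism described above: a user-item query attends over a shared memory to form a latent relation vector r, and the pair is scored by the translated distance ||u + r - v||. The dimensions, loss, and negative sampling are illustrative assumptions, not the published LRML code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LRMLSketch(nn.Module):
    """Latent relational metric learning: attend over a shared memory to build
    a per-pair relation vector r, then score with a translated distance."""
    def __init__(self, n_users, n_items, dim=64, n_memories=20):
        super().__init__()
        self.user = nn.Embedding(n_users, dim)
        self.item = nn.Embedding(n_items, dim)
        self.keys = nn.Parameter(torch.randn(n_memories, dim))
        self.memory = nn.Parameter(torch.randn(n_memories, dim))

    def relation(self, u, v):
        query = u * v                                    # joint user-item query
        attn = F.softmax(query @ self.keys.t(), dim=-1)  # attention over memory slots
        return attn @ self.memory                        # latent relation vector r

    def score(self, users, items):
        u, v = self.user(users), self.item(items)
        r = self.relation(u, v)
        return -((u + r - v) ** 2).sum(-1)               # higher = closer under translation

# Pairwise hinge loss over implicit feedback with randomly sampled negatives.
model = LRMLSketch(n_users=1000, n_items=5000)
users = torch.randint(0, 1000, (32,))
pos, neg = torch.randint(0, 5000, (32,)), torch.randint(0, 5000, (32,))
loss = F.relu(1.0 - model.score(users, pos) + model.score(users, neg)).mean()
loss.backward()
```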

* WWW 2018 


Towards Amortized Ranking-Critical Training for Collaborative Filtering

Jun 10, 2019
Sam Lobel, Chunyuan Li, Jianfeng Gao, Lawrence Carin

Collaborative filtering is widely used in modern recommender systems. Recent research shows that variational autoencoders (VAEs) yield state-of-the-art performance by integrating flexible representations from deep neural networks into latent variable models, mitigating limitations of traditional linear factor models. VAEs are typically trained by maximum likelihood estimation (MLE) on users' interactions with ground-truth items. While simple and often effective, MLE-based training does not directly maximize the recommendation-quality metrics one typically cares about, such as top-N ranking. In this paper we investigate new methods for training collaborative filtering models based on actor-critic reinforcement learning, to directly optimize the non-differentiable quality metrics of interest. Specifically, we train a critic network to approximate ranking-based metrics, and then update the actor network (represented here by a VAE) to directly optimize against the learned metrics. In contrast to traditional learning-to-rank methods that require re-running the optimization procedure for new lists, our critic-based method amortizes the scoring process with a neural network and can directly provide the (approximate) ranking scores for new lists. Empirically, we show that the proposed methods outperform several state-of-the-art baselines, including recently proposed deep learning approaches, on three large-scale real-world datasets. The code to reproduce the experimental results and figure plots is on GitHub: https://github.com/samlobel/RaCT_CF
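
The actor-critic idea can be sketched as follows. The official code is at the GitHub link above; this simplified PyTorch version uses a plain autoencoder actor, a tiny feature-based critic, and illustrative critic features rather than the paper's exact setup:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_items, hidden, k = 2000, 200, 10

class Actor(nn.Module):
    """Autoencoder-style recommender: interaction vector -> scores for all items."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(n_items, hidden)
        self.dec = nn.Linear(hidden, n_items)
    def forward(self, x):
        return self.dec(torch.tanh(self.enc(x)))

def ndcg_at_k(scores, heldout):
    """Non-differentiable ranking metric the critic learns to approximate."""
    top = scores.topk(k, dim=-1).indices
    gains = torch.gather(heldout, 1, top)
    discounts = 1.0 / torch.log2(torch.arange(2, k + 2, dtype=torch.float))
    dcg = (gains * discounts).sum(-1)
    cum = torch.cat([torch.zeros(1), discounts.cumsum(0)])
    idcg = cum[heldout.sum(-1).long().clamp(max=k)]
    return dcg / idcg.clamp(min=1e-8)

def critic_features(scores, x):
    # Illustrative per-user summary features fed to the critic (not the paper's exact choice).
    nll = F.binary_cross_entropy_with_logits(scores, x, reduction="none").mean(-1)
    return torch.stack([nll, x.sum(-1), scores.std(-1)], dim=-1)

actor = Actor()
critic = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1))
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

x = (torch.rand(64, n_items) < 0.02).float()   # toy binary interaction matrix

# 1) Critic step: regress the critic's output onto the true NDCG of the actor's ranking.
with torch.no_grad():
    scores = actor(x)
    target = ndcg_at_k(scores, x)
critic_loss = F.mse_loss(critic(critic_features(scores, x)).squeeze(-1), target)
critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

# 2) Actor step: maximize the critic's differentiable estimate of the ranking metric.
actor_loss = -critic(critic_features(actor(x), x)).mean()
actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
```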

* The first two authors contributed equally to this manuscript. Code: https://github.com/samlobel/RaCT_CF 


apk2vec: Semi-supervised multi-view representation learning for profiling Android applications

Sep 15, 2018
Annamalai Narayanan, Charlie Soh, Lihui Chen, Yang Liu, Lipo Wang

Building behavior profiles of Android applications (apps) with holistic, rich, and multi-view information (e.g., incorporating several semantic views of an app, such as API sequences, system calls, etc.) would significantly benefit downstream analytics tasks such as app categorization, recommendation, and malware analysis. Towards this goal, we design a semi-supervised Representation Learning (RL) framework named apk2vec to automatically generate a compact representation (aka profile/embedding) for a given app. More specifically, apk2vec has the following three unique characteristics which make it an excellent choice for large-scale app profiling: (1) it encompasses information from multiple semantic views such as API sequences, permissions, etc.; (2) being a semi-supervised embedding technique, it can make use of labels associated with apps (e.g., malware family or app category labels) to build high-quality app profiles; and (3) it combines RL and feature hashing, which allows it to efficiently build profiles of apps that stream over time (i.e., online learning). The resulting semi-supervised multi-view hash embeddings of apps can then be used for a wide variety of downstream tasks such as the ones mentioned above. Our extensive evaluations with more than 42,000 apps demonstrate that apk2vec's app profiles significantly outperform state-of-the-art techniques in four app analytics tasks, namely malware detection, familial clustering, app clone detection, and app recommendation.
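
As a small illustration of characteristic (3), the sketch below shows how feature hashing can fold tokens from several semantic views into one fixed-size profile vector. It is a toy under stated assumptions: the real apk2vec learns skip-gram-style embeddings, whereas this only demonstrates the hashing trick that keeps profiles fixed-size:

```python
import numpy as np
from hashlib import md5

def hash_index(token, dim):
    """Deterministic feature-hashing index for a token."""
    return int(md5(token.encode()).hexdigest(), 16) % dim

def app_profile(views, dim=256):
    """Combine tokens from several semantic views (API calls, permissions, ...)
    into a single hashed profile vector; `views` maps view name -> list of tokens."""
    vec = np.zeros(dim)
    for view_name, tokens in views.items():
        for tok in tokens:
            vec[hash_index(f"{view_name}:{tok}", dim)] += 1.0   # prefix keeps views distinct
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

profile = app_profile({
    "api":        ["android.telephony.SmsManager.sendTextMessage", "java.net.URL.openConnection"],
    "permission": ["SEND_SMS", "INTERNET"],
})
print(profile.shape)   # (256,)
```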

* International Conference on Data Mining, 2018 


Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance

Jun 30, 2020
Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, Daniel S. Weld

Increasingly, organizations are pairing humans with AI systems to improve decision-making and reduce costs. Proponents of human-centered AI argue that team performance can improve even further when the AI model explains its recommendations. However, a careful analysis of existing literature reveals that prior studies observed improvements due to explanations only when the AI, alone, outperformed both the human and the best human-AI team. This raises an important question: can explanations lead to complementary performance, i.e., accuracy higher than either the human or the AI working alone? We address this question by devising comprehensive studies of human-AI teaming, where participants solve a task with help from an AI system without explanations and from one with varying types of AI explanation support. We carefully controlled the setup to ensure comparable human and AI accuracy across experiments on three NLP datasets (two for sentiment analysis and one for question answering). While we found complementary improvements from AI augmentation, they were not increased by state-of-the-art explanations compared to simpler strategies, such as displaying the AI's confidence. We show that explanations increase the chance that humans will accept the AI's recommendation regardless of whether the AI is correct. While this clarifies the gains in team performance from explanations in prior work, it poses new challenges for human-centered AI: how can we best design systems to produce complementary performance? Can we develop explanatory approaches that help humans decide whether and when to trust AI input?

* Draft/pre-print 


Memory-Augmented Neural Networks for Predictive Process Analytics

Feb 03, 2018
Asjad Khan, Hung Le, Kien Do, Truyen Tran, Aditya Ghose, Hoa Dam, Renuka Sindhgatta

Process analytics involves a sophisticated layer of data analytics built over the traditional notion of process mining. The flexible execution of business process instances involves multiple critical decisions, including what task to perform next and what resources to allocate to a task. In this paper, we explore the application of deep learning techniques to various process analytics problems. Based on recent advances in the field, we specifically look at memory-augmented neural networks (MANNs) and adapt the latest such model, namely the Differentiable Neural Computer. We introduce two modifications to account for a variety of tasks in predictive process analytics: (i) separating the encoding phase and decoding phase, resulting in dual controllers, one for each phase; (ii) implementing a write-protected policy for the memory during the decoding phase. We demonstrate the feasibility and usefulness of our approach by solving a number of common process analytics tasks such as next-activity prediction, time-to-completion prediction, and suffix prediction. We also introduce the notion of MANN-based process analytics recommendation machinery that, once deployed, can serve as an effective business process recommendation engine, enabling organizations to answer various prescriptive process analytics questions. Using real-world datasets, we benchmark our results against those obtained from state-of-the-art methods. We show that MANN-based process analytics methods can achieve state-of-the-art performance and have much to offer for enterprise-specific process analytics applications.
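
A compact PyTorch sketch of the two modifications: separate controllers for encoding and decoding, with the external memory writable only during encoding. It substitutes a simple slot memory with content-based reads for a full Differentiable Neural Computer; all names and sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualControllerMemoryNet(nn.Module):
    """Separate encode/decode controllers; memory is write-protected while decoding."""
    def __init__(self, n_activities, dim=64, slots=16):
        super().__init__()
        self.embed = nn.Embedding(n_activities, dim)
        self.encoder = nn.GRUCell(dim, dim)   # controller 1: encoding phase
        self.decoder = nn.GRUCell(dim, dim)   # controller 2: decoding phase
        self.write = nn.Linear(dim, dim)
        self.out = nn.Linear(2 * dim, n_activities)
        self.slots = slots

    def read(self, memory, h):
        attn = F.softmax(memory @ h.unsqueeze(-1), dim=1)   # content-based addressing
        return (attn * memory).sum(1)

    def forward(self, prefix, steps):
        B, dim = prefix.size(0), self.embed.embedding_dim
        memory = torch.zeros(B, self.slots, dim)
        h = torch.zeros(B, dim)
        # --- encoding: controller 1 may write to memory ---
        for t in range(prefix.size(1)):
            h = self.encoder(self.embed(prefix[:, t]), h)
            memory = memory.clone()
            memory[:, t % self.slots] = self.write(h)
        # --- decoding: controller 2, memory is read-only ---
        logits, x = [], self.embed(prefix[:, -1])
        for _ in range(steps):
            h = self.decoder(x, h)
            r = self.read(memory, h)
            step_logits = self.out(torch.cat([h, r], -1))
            logits.append(step_logits)
            x = self.embed(step_logits.argmax(-1))
        return torch.stack(logits, 1)   # predicted suffix of activities

model = DualControllerMemoryNet(n_activities=30)
prefix = torch.randint(0, 30, (4, 12))      # a batch of partial process traces
suffix_logits = model(prefix, steps=5)
print(suffix_logits.shape)                  # (4, 5, 30)
```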



Exploiting Social Tags for Cross-Domain Collaborative Filtering

Dec 24, 2013
Yue Shi, Martha Larson, Alan Hanjalic

One of the most challenging problems in recommender systems based on the collaborative filtering (CF) concept is data sparseness, i.e., limited user preference data is available for making recommendations. Cross-domain collaborative filtering (CDCF) has been studied as an effective mechanism to alleviate the data sparseness of one domain using knowledge about user preferences from other domains. A key question in the context of CDCF is which common characteristics can be used to link different domains for effective knowledge transfer. In this paper, we assess the usefulness of user-contributed (social) tags in this respect. We do so by means of the Generalized Tag-induced Cross-domain Collaborative Filtering (GTagCDCF) approach, which we propose in this paper and develop based on the general collective matrix factorization framework. Assessment is done through a series of experiments using publicly available CF datasets that represent three cross-domain cases, i.e., two two-domain cases and one three-domain case. A comparative analysis of the two-domain cases involving GTagCDCF and several state-of-the-art CDCF approaches indicates the increased benefit of using social tags as explicit links between domains, compared to the implicit links exploited by existing CDCF methods. In addition, we show that users from different domains can already benefit from GTagCDCF if they share only a few common tags. Finally, we use the three-domain case to validate the robustness of GTagCDCF with respect to the scale of the datasets and the varying number of domains.
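
A heavily simplified NumPy sketch of collective matrix factorization with tag-induced coupling, assuming for brevity that the two domains share the same users and ignoring observation masks; the actual GTagCDCF formulation is more general than this toy:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items1, n_items2, n_tags, k = 50, 40, 30, 20, 8

# Toy observed matrices (in practice: sparse ratings and tag assignments).
R1 = (rng.random((n_users, n_items1)) < 0.1) * rng.integers(1, 6, (n_users, n_items1))
R2 = (rng.random((n_users, n_items2)) < 0.1) * rng.integers(1, 6, (n_users, n_items2))
T  = (rng.random((n_users, n_tags)) < 0.2).astype(float)   # user-tag assignments

U  = rng.normal(0, 0.1, (n_users, k))     # user factors, shared across domains
V1 = rng.normal(0, 0.1, (n_items1, k))    # item factors, domain 1
V2 = rng.normal(0, 0.1, (n_items2, k))    # item factors, domain 2
G  = rng.normal(0, 0.1, (n_tags, k))      # tag factors linking the domains

lr, reg = 0.005, 0.1
for _ in range(300):
    # Gradient steps on the squared error of all three factorizations jointly.
    E1, E2, Et = R1 - U @ V1.T, R2 - U @ V2.T, T - U @ G.T
    U  += lr * ((E1 @ V1 + E2 @ V2 + Et @ G) - reg * U)
    V1 += lr * (E1.T @ U - reg * V1)
    V2 += lr * (E2.T @ U - reg * V2)
    G  += lr * (Et.T @ U - reg * G)

print("domain-2 predictions for user 0:", (U @ V2.T)[0, :5].round(2))
```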

* Manuscript under review 


Fairness in Ranking under Uncertainty

Jul 14, 2021
Ashudeep Singh, David Kempe, Thorsten Joachims

Fairness has emerged as an important consideration in algorithmic decision-making. Unfairness occurs when an agent with higher merit obtains a worse outcome than an agent with lower merit. Our central point is that a primary cause of unfairness is uncertainty: a principal or algorithm making decisions never has access to the agents' true merit and instead uses proxy features that only imperfectly predict it (e.g., GPA, star ratings, recommendation letters). None of these ever fully capture an agent's merit; yet existing approaches have mostly defined fairness notions directly in terms of observed features and outcomes. We argue that it is more principled to acknowledge and model the uncertainty explicitly: the role of observed features is to give rise to a posterior distribution over the agents' merits. We use this viewpoint to define a notion of approximate fairness in ranking. We call an algorithm $\phi$-fair (for $\phi \in [0,1]$) if it has the following property for all agents $x$ and all $k$: if agent $x$ is among the top $k$ agents with respect to merit with probability at least $\rho$ (according to the posterior merit distribution), then the algorithm places the agent among the top $k$ agents in its ranking with probability at least $\phi \rho$. We show how to compute rankings that optimally trade off approximate fairness against utility to the principal. In addition to this theoretical characterization, we present an empirical analysis of the potential impact of the approach in simulation studies. For real-world validation, we applied the approach in the context of a paper recommendation system that we built and fielded at a large conference.
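
The $\phi$-fairness condition can be checked numerically. Below is an illustrative NumPy sketch that estimates each agent's posterior probability of being among the top $k$ by merit via Monte Carlo sampling and compares it against the probabilities with which a (here deterministic) ranking places the agent in its top $k$; the Gaussian posterior and the mean-based ranking are toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_samples = 5, 10_000

# Toy posterior over merits: independent Gaussians per agent.
mu    = np.array([2.0, 1.5, 1.0, 0.5, 0.0])
sigma = np.array([0.2, 0.8, 0.3, 1.0, 0.5])
merit_samples = rng.normal(mu, sigma, size=(n_samples, n_agents))

def top_k_probability(samples, k):
    """P(agent is among the top-k agents by true merit), per agent."""
    ranks = (-samples).argsort(axis=1).argsort(axis=1)   # 0 = highest sampled merit
    return (ranks < k).mean(axis=0)

def is_phi_fair(ranking_probs, merit_probs, phi):
    """ranking_probs[i, k-1] = P(algorithm puts agent i in its top k)."""
    for k in range(1, n_agents + 1):
        if np.any(ranking_probs[:, k - 1] < phi * merit_probs[:, k - 1] - 1e-9):
            return False
    return True

merit_probs = np.stack([top_k_probability(merit_samples, k)
                        for k in range(1, n_agents + 1)], axis=1)

# A deterministic ranking by posterior mean: each agent always occupies one position.
order = (-mu).argsort()
ranking_probs = np.zeros((n_agents, n_agents))
for pos, agent in enumerate(order):
    ranking_probs[agent, pos:] = 1.0

print("merit-based top-k probabilities:\n", merit_probs.round(2))
print("phi = 0.8 fair?", is_phi_fair(ranking_probs, merit_probs, 0.8))
```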

* Preprint under submission. 19 pages, 4 figures 

