
"Recommendation": models, code, and papers

Inductive Relational Matrix Completion

Jul 09, 2020
Qitian Wu, Hengrui Zhang, Hongyuan Zha

Data sparsity and cold-start issues emerge as two major bottlenecks for matrix completion on user-item interaction matrices. We propose a novel method that can fundamentally address these issues. The main idea is to partition users into support users, who have many observed interactions (i.e., non-zero entries in the matrix), and query users, who have few observed entries. For support users, we learn their transductive preference embeddings using matrix factorization over their interactions (a relatively dense sub-matrix). For query users, we devise an inductive relational model that learns to estimate the underlying relations between the two groups of users. This allows us to attentively aggregate the preference embeddings of support users to compute inductive embeddings for query users. The new method addresses the data sparsity issue by generalizing the behavior patterns of warm-start users to others, which also enables the model to work effectively for cold-start users with no historical interactions. As theoretical insights, we show that, under mild conditions, a general version of our model does not sacrifice any expressive power on query users compared with transductive matrix factorization, and that the generalization error on query users is bounded in terms of the number of support users and of the query users' observed interactions. Moreover, extensive experiments on real-world datasets demonstrate that our model outperforms several state-of-the-art methods, achieving significant improvements on MAE and AUC for warm-start, few-shot (sparsity), and zero-shot (cold-start) recommendation.
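
As a rough illustration of the attentive aggregation step, the sketch below computes an inductive embedding for a single query user from the support users' factorized embeddings. The similarity-based scoring stands in for the paper's learned relational model; all names and shapes are assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def inductive_query_embedding(support_emb, support_ratings, query_ratings):
    """Aggregate support-user embeddings into an inductive embedding for one query user.

    support_emb     : (S, d) preference embeddings of support users (from matrix factorization)
    support_ratings : (S, I) rating rows of support users (0 = unobserved)
    query_ratings   : (I,)   sparse rating row of the query user (0 = unobserved)
    """
    observed = query_ratings != 0                        # items the query user rated
    # Score each support user by similarity on the co-rated items: a simple
    # stand-in for the paper's learned inductive relational model.
    scores = support_ratings[:, observed] @ query_ratings[observed]
    weights = softmax(scores)                            # attention over support users
    return weights @ support_emb                         # (d,) inductive embedding
```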



Multi-Interactive Attention Network for Fine-grained Feature Learning in CTR Prediction

Dec 13, 2020
Kai Zhang, Hao Qian, Qing Cui, Qi Liu, Longfei Li, Jun Zhou, Jianhui Ma, Enhong Chen

In the Click-Through Rate (CTR) prediction scenario, users' sequential behaviors are widely exploited in the recent literature to capture user interest. However, despite being extensively studied, these sequential methods still suffer from three limitations. First, existing methods mostly apply attention over user behaviors, which is not always suitable for CTR prediction because users often click on new products that are irrelevant to any historical behavior. Second, in real scenarios there are numerous users who were active long ago but have become relatively inactive recently, so it is hard to precisely capture a user's current preferences from early behaviors. Third, multiple representations of a user's historical behaviors in different feature subspaces are largely ignored. To remedy these issues, we propose a Multi-Interactive Attention Network (MIAN) to comprehensively extract the latent relationships among all kinds of fine-grained features (e.g., gender, age, and occupation in the user profile). Specifically, MIAN contains a Multi-Interactive Layer (MIL) that integrates three local interaction modules to capture multiple representations of user preference from sequential behaviors while simultaneously utilizing fine-grained user-specific and context information. In addition, we design a Global Interaction Module (GIM) to learn high-order interactions and balance the different impacts of multiple features. Finally, offline experimental results on three datasets, together with an online A/B test in a large-scale recommendation system, demonstrate the effectiveness of our proposed approach.
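
The following is a minimal PyTorch sketch of the overall idea of combining several local interaction modules with a global re-weighting module. It is not the MIAN architecture itself; the module choices, dimensions, and names are assumptions for illustration.

```python
import torch
import torch.nn as nn

class MiniMultiInteraction(nn.Module):
    """Toy multi-interactive layer: three local interaction modules (behaviors x candidate,
    candidate x user profile, candidate x context) plus a global gate that re-weights them."""

    def __init__(self, d=32):
        super().__init__()
        self.seq_attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.item_user = nn.Linear(2 * d, d)      # candidate-profile interaction
        self.item_ctx = nn.Linear(2 * d, d)       # candidate-context interaction
        self.global_gate = nn.Linear(3 * d, 3)    # balances the three local outputs
        self.predict = nn.Linear(d, 1)

    def forward(self, behaviors, candidate, profile, context):
        # behaviors: (B, L, d); candidate / profile / context: (B, d)
        q = candidate.unsqueeze(1)                                    # (B, 1, d)
        seq_out, _ = self.seq_attn(q, behaviors, behaviors)           # attend over behaviors
        seq_out = seq_out.squeeze(1)
        iu = torch.relu(self.item_user(torch.cat([candidate, profile], dim=-1)))
        ic = torch.relu(self.item_ctx(torch.cat([candidate, context], dim=-1)))
        gate = torch.softmax(self.global_gate(torch.cat([seq_out, iu, ic], dim=-1)), dim=-1)
        fused = gate[:, 0:1] * seq_out + gate[:, 1:2] * iu + gate[:, 2:3] * ic
        return torch.sigmoid(self.predict(fused)).squeeze(-1)         # click probability
```

Calling the module with random tensors of shapes (B, L, 32) for behaviors and (B, 32) for the other inputs returns one click probability per example.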

* 9 pages, 6 figures, WSDM2021, accepted 


Pair-view Unsupervised Graph Representation Learning

Dec 11, 2020
You Li, Binli Luo, Ning Gui

Low-dimensional graph embeddings have proved extremely useful in various downstream tasks on large graphs, e.g., link-related content recommendation and node classification. Most existing embedding approaches take nodes as the basic unit for information aggregation, e.g., node perception fields in GNNs or contextual nodes in random walks. The main drawback of such a node view is its lack of support for expressing compound relationships between nodes, which results in the loss of a certain degree of graph information during embedding. To this end, this paper proposes PairE (Pair Embedding), a solution that uses a "pair", a higher-level unit than a "node", as the core for graph embeddings. Accordingly, a multi-self-supervised auto-encoder is designed to fulfill two pretext tasks: reconstructing the feature distribution of each pair and of its surrounding context. PairE has three major advantages: 1) informative, embeddings beyond the node view can preserve richer information about the graph; 2) simple, the solutions provided by PairE are time-saving, storage-efficient, and require fewer hyper-parameters; 3) highly adaptable, with the introduced translator operator that maps pair embeddings to node embeddings, PairE can be effectively used in both link-based and node-based graph analysis. Experimental results show that PairE consistently outperforms the state-of-the-art baselines in all four downstream tasks, especially by significant margins in the link prediction and multi-label node classification tasks.
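
A minimal sketch of the pair-view idea, assuming simple hand-rolled operators: pair-level features are built from the two endpoints, and a "translator" maps pair embeddings back to node embeddings by pooling over each node's incident pairs. The actual PairE auto-encoder and translator are not reproduced here.

```python
import numpy as np

def pair_features(x_u, x_v):
    # A simple pair-level view: combine the two endpoints' feature vectors.
    return np.concatenate([x_u + x_v, np.abs(x_u - x_v)])

def translate_to_node_embeddings(edges, pair_emb, num_nodes):
    """One possible 'translator' operator: mean-pool the embeddings of all pairs
    a node participates in to obtain its node embedding."""
    dim = pair_emb.shape[1]
    node_emb = np.zeros((num_nodes, dim))
    counts = np.zeros(num_nodes)
    for (u, v), e in zip(edges, pair_emb):
        node_emb[u] += e
        node_emb[v] += e
        counts[u] += 1
        counts[v] += 1
    counts[counts == 0] = 1
    return node_emb / counts[:, None]

edges = [(0, 1), (1, 2), (0, 2)]
X = np.random.rand(3, 4)                       # raw node features
pair_emb = np.stack([pair_features(X[u], X[v]) for u, v in edges])
node_emb = translate_to_node_embeddings(edges, pair_emb, num_nodes=3)
```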

* 9 pages, 3 figures and 4 tables 


The Semantic Knowledge Graph: A compact, auto-generated model for real-time traversal and ranking of any relationship within a domain

Sep 05, 2016
Trey Grainger, Khalifeh AlJadda, Mohammed Korayem, Andries Smith

This paper describes a new kind of knowledge representation and mining system which we are calling the Semantic Knowledge Graph. At its heart, the Semantic Knowledge Graph leverages an inverted index, along with a complementary uninverted index, to represent nodes (terms) and edges (the documents within intersecting postings lists for multiple terms/nodes). This provides a layer of indirection between each pair of nodes and their corresponding edge, enabling edges to materialize dynamically from underlying corpus statistics. As a result, any combination of nodes can have edges to any other nodes materialize and be scored to reveal latent relationships between the nodes. This provides numerous benefits: the knowledge graph can be built automatically from a real-world corpus of data, new nodes - along with their combined edges - can be instantly materialized from any arbitrary combination of preexisting nodes (using set operations), and a full model of the semantic relationships between all entities within a domain can be represented and dynamically traversed using a highly compact representation of the graph. Such a system has widespread applications in areas as diverse as knowledge modeling and reasoning, natural language processing, anomaly detection, data cleansing, semantic search, analytics, data classification, root cause analysis, and recommendation systems. The main contribution of this paper is the introduction of a novel system - the Semantic Knowledge Graph - which is able to dynamically discover and score interesting relationships between any arbitrary combination of entities (words, phrases, or extracted concepts) through dynamically materializing nodes and edges from a compact graphical representation built automatically from a corpus of data representative of a knowledge domain.
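
A toy sketch of the core mechanism, assuming a plain in-memory inverted index: an edge between two terms is materialized from the intersection of their postings lists and scored against chance co-occurrence. The PMI-like score below is a simple stand-in for the paper's relatedness scoring.

```python
from collections import defaultdict
import math

def build_inverted_index(docs):
    """docs: list of token lists. Returns term -> set of doc ids (postings list)."""
    index = defaultdict(set)
    for doc_id, tokens in enumerate(docs):
        for t in tokens:
            index[t].add(doc_id)
    return index

def edge_score(index, term_a, term_b, num_docs):
    """Materialize the edge between two terms from their intersecting postings lists
    and score it relative to what chance co-occurrence alone would predict."""
    docs_a, docs_b = index[term_a], index[term_b]
    together = len(docs_a & docs_b)                   # documents that form the edge
    if together == 0:
        return 0.0
    expected = len(docs_a) * len(docs_b) / num_docs   # chance co-occurrence
    return together * math.log((together + 1) / (expected + 1))

docs = [["graph", "search", "ranking"], ["graph", "index"], ["search", "ranking", "query"]]
index = build_inverted_index(docs)
print(edge_score(index, "graph", "search", len(docs)))
```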

* Accepted for publication in 2016 IEEE 3rd International Conference on Data Science and Advanced Analytics 


Labeling-Free Comparison Testing of Deep Learning Models

Apr 08, 2022
Yuejun Guo, Qiang Hu, Maxime Cordy, Xiaofei Xie, Mike Papadakis, Yves Le Traon

Various deep neural networks (DNNs) have been developed and reported for their tremendous success in multiple domains. Given a specific task, developers can collect massive numbers of DNNs from public sources for efficient reuse and to avoid redundant work from scratch. However, testing the performance (e.g., accuracy and robustness) of multiple DNNs and giving a reasonable recommendation of which model should be used is challenging, given the scarcity of labeled data and the demand for domain expertise. Existing testing approaches are mainly selection-based: after sampling, a few of the test data are labeled to discriminate DNNs. Therefore, due to the randomness of sampling, the performance ranking is not deterministic. In this paper, we propose a labeling-free comparison testing approach to overcome the limitations of labeling effort and sampling randomness. The main idea is to learn a Bayesian model that infers the models' specialty based only on predicted labels. To evaluate the effectiveness of our approach, we undertook exhaustive experiments on 9 benchmark datasets spanning the domains of image, text, and source code, and on 165 DNNs. In addition to accuracy, we consider robustness against synthetic and natural distribution shifts. The experimental results demonstrate that the performance of existing approaches degrades under distribution shifts. Our approach outperforms the baseline methods by up to 0.74 and 0.53 in Spearman's correlation and Kendall's $\tau$, respectively, regardless of the dataset and distribution shift. Additionally, we investigate the impact of model quality (accuracy and robustness) and diversity (standard deviation of the quality) on testing effectiveness and observe that there is a higher chance of a good result when the quality is over 50\% and the diversity is larger than 18\%.
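
As a hedged illustration of ranking models without ground-truth labels, the sketch below scores each model by its agreement with the ensemble majority vote over predicted labels. This is only a simple proxy, not the paper's Bayesian specialty model.

```python
import numpy as np

def labeling_free_ranking(predictions):
    """predictions: (num_models, num_inputs) integer class labels from each candidate DNN.
    Rank the models without ground truth by their agreement with the ensemble
    majority vote."""
    num_models, num_inputs = predictions.shape
    majority = np.array([np.bincount(predictions[:, i]).argmax() for i in range(num_inputs)])
    agreement = (predictions == majority).mean(axis=1)   # one score per model
    ranking = np.argsort(-agreement)                     # best-agreeing model first
    return ranking, agreement

# With held-out labels, the estimated ranking could be compared with the true accuracy
# ranking via rank correlation, e.g. scipy.stats.spearmanr or Kendall's tau.
```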

* 12 pages 


CMA-CLIP: Cross-Modality Attention CLIP for Image-Text Classification

Dec 09, 2021
Huidong Liu, Shaoyuan Xu, Jinmiao Fu, Yang Liu, Ning Xie, Chien-Chih Wang, Bryan Wang, Yi Sun

Modern Web systems such as social media and e-commerce contain rich content expressed in images and text. Leveraging information from multiple modalities can improve the performance of machine learning tasks such as classification and recommendation. In this paper, we propose the Cross-Modality Attention Contrastive Language-Image Pre-training (CMA-CLIP), a new framework that unifies two types of cross-modality attention, sequence-wise attention and modality-wise attention, to effectively fuse information from image-text pairs. The sequence-wise attention enables the framework to capture the fine-grained relationship between image patches and text tokens, while the modality-wise attention weighs each modality by its relevance to the downstream tasks. In addition, by adding task-specific modality-wise attentions and multilayer perceptrons, our proposed framework is capable of performing multi-task classification with multiple modalities. We conduct experiments on a Major Retail Website Product Attribute (MRWPA) dataset and two public datasets, Food101 and Fashion-Gen. The results show that CMA-CLIP outperforms the pre-trained and fine-tuned CLIP by an average of 11.9% in recall at the same level of precision on the MRWPA dataset for multi-task classification. It also surpasses the state-of-the-art method on the Fashion-Gen dataset by 5.5% in accuracy and achieves competitive performance on the Food101 dataset. Through detailed ablation studies, we further demonstrate the effectiveness of both cross-modality attention modules and our method's robustness against noise in image and text inputs, which is a common challenge in practice.
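
A minimal PyTorch sketch of modality-wise attention, assuming pooled image and text embeddings are already available (e.g., from a CLIP-style encoder). The scorer, dimensions, and fusion below are illustrative assumptions rather than the CMA-CLIP implementation.

```python
import torch
import torch.nn as nn

class ModalityWiseAttention(nn.Module):
    """Weigh the image and text representations by their estimated relevance to the
    task before fusing them -- a toy version of modality-wise attention."""

    def __init__(self, d=512, num_classes=10):
        super().__init__()
        self.relevance = nn.Linear(d, 1)          # shared relevance scorer for both modalities
        self.classifier = nn.Linear(d, num_classes)

    def forward(self, img_emb, txt_emb):
        # img_emb, txt_emb: (B, d) pooled embeddings
        stacked = torch.stack([img_emb, txt_emb], dim=1)          # (B, 2, d)
        weights = torch.softmax(self.relevance(stacked), dim=1)   # (B, 2, 1) modality weights
        fused = (weights * stacked).sum(dim=1)                    # (B, d) weighted fusion
        return self.classifier(fused)                             # task logits
```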

* 9 pages, 2 figures, 6 tables, 1 algorithm 


Explainable Graph-based Search for Lessons-Learned Documents in the Semiconductor Industry

May 18, 2021
Hasan Abu-Rasheed, Christian Weber, Johannes Zenkert, Roland Krumm, Madjid Fath

Industrial processes produce a considerable volume of data and thus information. Whether it is structured sensory data or semi-structured to unstructured textual data, the knowledge that can be derived from it is critical to the sustainable development of the industrial process. A key challenge of this sustainability is the intelligent management of the generated data, as well as the knowledge extracted from it, in order to utilize this knowledge for improving future procedures. This challenge is a result of tailored documentation methods and domain-specific requirements, which include the need for quick visibility of the documented knowledge. In this paper, we utilize the expert knowledge documented in chip-design failure reports to support user access to information that is relevant to a current chip design. Unstructured, free textual data in previous failure documentation provides a valuable source of lessons learned, which expert design engineers have experienced, solved, and documented. To achieve sustainable utilization of knowledge within the company, not only must the inherent knowledge be mined from unstructured textual data, but also the relations between the lessons learned, uncovering potentially unknown links. In this research, a knowledge graph is constructed to represent and use the interconnections between reported design failures. A search engine is developed and applied to the graph to answer queries. In contrast to mere keyword-based searching, the searchability of the knowledge graph offers enhanced search results beyond direct matches and acts as a means of generating explainable results and result recommendations. Results are provided to the design engineer through an interactive search interface, in which feedback from the user is used to further optimize relations for future iterations of the knowledge graph.
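
A rough sketch of explainable graph search over a lessons-learned knowledge graph, assuming a plain adjacency-list graph: documents reached by traversal are returned together with the connecting path, which serves as the explanation. The data structures and names here are hypothetical, not the paper's system.

```python
from collections import deque

def explainable_search(graph, doc_of_node, seeds, max_hops=2):
    """graph: node -> list of neighbor nodes; doc_of_node: node -> document title;
    seeds: nodes that directly match the query terms. Returns (document, explanation) pairs
    for documents reached indirectly, where the explanation is the connecting path."""
    results = []
    visited = set(seeds)
    queue = deque((s, [s]) for s in seeds)
    while queue:
        node, path = queue.popleft()
        if len(path) - 1 > max_hops:
            continue
        if node in doc_of_node and len(path) > 1:        # indirect, explainable match
            results.append((doc_of_node[node], " -> ".join(path)))
        for nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [nxt]))
    return results
```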

* Accepted in the "Computing2021" conference, 15-16 July 2021, London, UK 


Social Science Guided Feature Engineering: A Novel Approach to Signed Link Analysis

Jan 04, 2020
Ghazaleh Beigi, Jiliang Tang, Huan Liu

Many real-world relations can be represented by signed networks with positive links (e.g., friendships and trust) and negative links (e.g., foes and distrust). Link prediction helps advance tasks in social network analysis such as recommendation systems. Most existing work on link analysis focuses on unsigned social networks. The existence of negative links piques research interest in investigating whether the properties and principles of signed networks differ from those of unsigned networks, and mandates dedicated efforts on link analysis for signed social networks. Recent findings suggest that the properties of signed networks substantially differ from those of unsigned networks and that negative links can be of significant help in signed link analysis in complementary ways. In this article, we center our discussion on the challenging problem of signed link analysis. Signed link analysis faces the problem of data sparsity, i.e., only a small percentage of signed links are given. This problem can get even worse when negative links are much sparser than positive ones, as users are more inclined towards a positive disposition than a negative one. We investigate how we can take advantage of other sources of information for signed link analysis. This research is mainly guided by three social science theories: Emotional Information, Diffusion of Innovations, and Individual Personality. Guided by these, we extract three categories of related features and leverage them for signed link analysis. Experiments show the significance of the features gleaned from social theories for signed link prediction and for addressing the data sparsity challenge.
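
As a hedged sketch of the overall pipeline, the snippet below trains a plain logistic-regression classifier on three hypothetical theory-guided features for each user pair. The features and data are synthetic placeholders, not the article's actual feature definitions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical theory-guided features for a user pair (u, v): an emotion score mined from
# u's posts about v (Emotional Information), how closely v's adoption behavior follows u's
# (Diffusion of Innovations), and the similarity of their personality profiles
# (Individual Personality). Values below are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                       # (user pairs, 3 features)
y = (X @ np.array([1.5, 0.8, 1.0]) + rng.normal(scale=0.5, size=1000) > 0).astype(int)

clf = LogisticRegression().fit(X, y)                 # 1 = positive link, 0 = negative link
print("training accuracy:", clf.score(X, y))
```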

* This work is published in ACM Transactions on Intelligent Systems and Technology (ACM TIST), 2019 


RACE: Sub-Linear Memory Sketches for Approximate Near-Neighbor Search on Streaming Data

Feb 18, 2019
Benjamin Coleman, Anshumali Shrivastava, Richard G. Baraniuk

We demonstrate the first possibility of a sub-linear memory sketch for solving the approximate near-neighbor search problem. In particular, we develop an online sketching algorithm that can compress $N$ vectors into a tiny sketch consisting of small arrays of counters whose size scales as $O(N^{b}\log^2{N})$, where $b < 1$ depends on the stability of the near-neighbor search. This sketch is sufficient to identify the top-$v$ near-neighbors with high probability. To the best of our knowledge, this is the first near-neighbor search algorithm that breaks the linear memory ($O(N)$) barrier. We achieve sub-linear memory by combining advances in locality sensitive hashing (LSH) based estimation, especially the recently published ACE algorithm, with compressed sensing and heavy-hitter techniques. We provide strong theoretical guarantees; in particular, our analysis sheds new light on the memory-accuracy tradeoff in the near-neighbor search setting and the role of sparsity in compressed sensing, which could be of independent interest. We rigorously evaluate our framework, which we call RACE (Repeated ACE) data structures, on a friend recommendation task on the Google Plus graph with more than 100,000 high-dimensional vectors. RACE provides compression that is orders of magnitude better than the random-projection-based alternative, which is unsurprising given the theoretical advantage. We anticipate that RACE will enable both new theoretical perspectives on near-neighbor search and new methodologies for applications like high-speed data mining, the internet of things (IoT), and beyond.
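
A toy sketch of the repeated-counter-array idea that RACE builds on, assuming a SimHash-style signed-random-projection LSH: each array counts how many inserted vectors land in each bucket, and a query's average counter value serves as a crude near-neighbor signal. This omits the compressed-sensing and heavy-hitter machinery described in the paper.

```python
import numpy as np

class RepeatedCounterSketch:
    """Repeated arrays of counters indexed by LSH: a simplified, ACE-like sketch."""

    def __init__(self, dim, num_arrays=10, bits=8, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.normal(size=(num_arrays, bits, dim))   # signed random projections
        self.counters = np.zeros((num_arrays, 2 ** bits), dtype=np.int64)
        self.bits = bits

    def _bucket(self, x):
        signs = (self.planes @ x > 0).astype(np.int64)           # (num_arrays, bits)
        return signs @ (1 << np.arange(self.bits))               # one bucket id per array

    def add(self, x):
        self.counters[np.arange(len(self.counters)), self._bucket(x)] += 1

    def estimate(self, q):
        # Average counter value at q's buckets: roughly how many stored vectors hash near q.
        return self.counters[np.arange(len(self.counters)), self._bucket(q)].mean()
```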


