
"Recommendation": models, code, and papers

Specialized Document Embeddings for Aspect-based Similarity of Research Papers

Mar 28, 2022
Malte Ostendorff, Till Blume, Terry Ruas, Bela Gipp, Georg Rehm

Document embeddings and similarity measures underpin content-based recommender systems, whereby a document is commonly represented as a single generic embedding. However, similarity computed on single vector representations provides only one perspective on document similarity that ignores which aspects make two documents alike. To address this limitation, aspect-based similarity measures have been developed using document segmentation or pairwise multi-class document classification. While segmentation harms document coherence, the pairwise classification approach scales poorly to large-scale corpora. In this paper, we treat aspect-based similarity as a classical vector similarity problem in aspect-specific embedding spaces. We represent a document not as a single generic embedding but as multiple specialized embeddings. Our approach avoids document segmentation and scales linearly w.r.t. the corpus size. In an empirical study, we use the Papers with Code corpus containing 157,606 research papers and consider the task, method, and dataset of the respective research papers as their aspects. We compare and analyze three generic document embeddings, six specialized document embeddings, and a pairwise classification baseline in the context of research paper recommendations. As generic document embeddings, we consider FastText, SciBERT, and SPECTER. To compute the specialized document embeddings, we compare three alternative methods inspired by retrofitting, fine-tuning, and Siamese networks. In our experiments, Siamese SciBERT achieved the highest scores. Additional analyses indicate an implicit bias of the generic document embeddings towards the dataset aspect and against the method aspect of each research paper. Our approach of aspect-based document embeddings mitigates potential risks arising from implicit biases by making them explicit.

* Accepted for publication at JCDL 2022 
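
To make the aspect-based retrieval idea above concrete, here is a minimal sketch of per-aspect nearest-neighbour recommendation over pre-computed specialized embeddings. The array layout, aspect names, and cosine ranking are illustrative assumptions, not the authors' released code.

    # Hypothetical sketch: per-aspect nearest-neighbour recommendation over
    # pre-computed specialized document embeddings.
    import numpy as np

    ASPECTS = ["task", "method", "dataset"]  # aspects considered in the paper

    def cosine_top_k(query_vec, corpus_vecs, k=5):
        # Rank all corpus documents by cosine similarity to one query embedding.
        q = query_vec / np.linalg.norm(query_vec)
        c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
        return np.argsort(-(c @ q))[:k]

    def recommend(doc_id, embeddings, k=5):
        # embeddings: dict mapping aspect name -> (n_docs, dim) matrix, e.g.
        # produced by a specialized (Siamese-style) encoder for that aspect.
        recs = {}
        for aspect in ASPECTS:
            vecs = embeddings[aspect]
            neighbours = cosine_top_k(vecs[doc_id], vecs, k + 1)
            recs[aspect] = [int(i) for i in neighbours if i != doc_id][:k]
        return recs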


FES: A Fast Efficient Scalable QoS Prediction Framework

Mar 16, 2021
Soumi Chattopadhyay, Chandranath Adak, Ranjana Roy Chowdhury

Quality-of-Service (QoS) prediction of web services is an integral part of services computing due to its diverse applications in the various facets of a service life cycle, such as service composition, service selection, and service recommendation. One of the primary objectives of designing a QoS prediction algorithm is to achieve satisfactory prediction accuracy. However, accuracy is not the only criterion to meet while developing a QoS prediction algorithm. The algorithm also has to be fast in terms of prediction time so that it can be integrated into a real-time recommendation or composition system. The other important factor to consider while designing the prediction algorithm is scalability, to ensure that the prediction algorithm can tackle large-scale datasets. The existing algorithms on QoS prediction often compromise on one goal while ensuring the others. In this paper, we propose a semi-offline QoS prediction model to achieve three important goals simultaneously: high accuracy, fast prediction time, and scalability. Here, we aim to predict the QoS value of a service, which varies across users. Our framework consists of multi-phase prediction algorithms: preprocessing-phase prediction, online prediction, and prediction using the pre-trained model. In the preprocessing phase, we first apply multi-level clustering on the dataset to obtain correlated users and services. We then preprocess the clusters using collaborative filtering to remove the sparsity of the given QoS invocation log matrix. Finally, we create a two-staged, semi-offline regression model using neural networks to predict the QoS value of a service to be invoked by a user in real time. Our experimental results on four publicly available WS-DREAM datasets show the efficiency of our framework in terms of accuracy, scalability, and fast responsiveness as compared to the state-of-the-art methods.

* 13 pages, 15 figures 
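
A rough sketch of a multi-phase pipeline in the spirit described above (cluster users, densify the sparse QoS log, then fit an offline regressor that is queried at prediction time). The models, features, and hyper-parameters here are placeholder assumptions, not the FES implementation.

    # Illustrative sketch of a cluster -> densify -> regress QoS pipeline.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPRegressor

    def train_qos_model(qos, mask, n_clusters=8):
        # qos: (n_users, n_services) invocation log; mask: True where observed.
        # Preprocessing phase: fill missing entries with service means and
        # cluster users to capture correlated invocation behaviour.
        svc_mean = (qos * mask).sum(0) / np.maximum(mask.sum(0), 1)
        dense = np.where(mask, qos, svc_mean)
        user_cluster = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(dense)

        # Offline phase: regression over features of observed (user, service) pairs.
        users, services = np.nonzero(mask)
        feats = np.column_stack([user_cluster[users],
                                 dense[users].mean(axis=1),
                                 svc_mean[services]])
        model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=500)
        model.fit(feats, qos[users, services])
        return model, user_cluster, dense, svc_mean

    def predict_qos(model, user_cluster, dense, svc_mean, u, s):
        # Online phase: cheap feature lookup plus one forward pass.
        x = np.array([[user_cluster[u], dense[u].mean(), svc_mean[s]]])
        return float(model.predict(x)[0])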


Deep Text Mining of Instagram Data Without Strong Supervision

Sep 24, 2019
Kim Hammar, Shatha Jaradat, Nima Dokoohaki, Mihhail Matskin

With the advent of social media, our online feeds increasingly consist of short, informal, and unstructured text. This textual data can be analyzed for the purpose of improving user recommendations and detecting trends. Instagram is one of the largest social media platforms, containing both text and images. However, most of the prior research on text processing in social media is focused on analyzing Twitter data, and little attention has been paid to text mining of Instagram data. Moreover, many text mining methods rely on annotated training data, which in practice is both difficult and expensive to obtain. In this paper, we present methods for unsupervised mining of fashion attributes from Instagram text, which can enable a new kind of user recommendation in the fashion domain. In this context, we analyze a corpus of Instagram posts from the fashion domain, introduce a system for extracting fashion attributes from Instagram, and train a deep clothing classifier with weak supervision to classify Instagram posts based on the associated text. With our experiments, we confirm that word embeddings are a useful asset for information extraction. Experimental results show that information extraction using word embeddings outperforms a baseline that uses Levenshtein distance. The results also show the benefit of combining weak supervision signals using generative models instead of majority voting. Using weak supervision and generative modeling, an F1 score of 0.61 is achieved on the task of classifying the image contents of Instagram posts based solely on the associated text, which is on par with human performance. Finally, our empirical study provides one of the few available studies on Instagram text and shows that the text is noisy, that the text distribution exhibits the long-tail phenomenon, and that comment sections on Instagram are multilingual.

* 8 pages, 5 figures. Pre-print for paper to appear in conference proceedings for the Web Intelligence Conference 
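
The comparison between embedding-based attribute matching and a string-distance baseline could look roughly like the sketch below; the attribute vocabulary, thresholds, and embedding callable are hypothetical stand-ins rather than the authors' system.

    # Sketch: match Instagram post tokens to a fashion-attribute vocabulary,
    # comparing embedding similarity against an edit-distance-style baseline.
    import difflib
    import numpy as np

    ATTRIBUTES = ["denim", "floral", "leather", "vintage", "striped"]

    def embedding_match(token, vocab, embed, threshold=0.7):
        # embed: callable mapping a word to a dense vector (e.g. FastText vectors).
        tok_vec = embed(token)
        best, best_sim = None, threshold
        for attr in vocab:
            vec = embed(attr)
            sim = float(np.dot(tok_vec, vec) /
                        (np.linalg.norm(tok_vec) * np.linalg.norm(vec) + 1e-9))
            if sim > best_sim:
                best, best_sim = attr, sim
        return best

    def string_match(token, vocab, threshold=0.8):
        # Baseline: approximate edit-distance similarity via difflib.
        best, best_sim = None, threshold
        for attr in vocab:
            sim = difflib.SequenceMatcher(None, token, attr).ratio()
            if sim > best_sim:
                best, best_sim = attr, sim
        return best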


ScoreNet: Learning Non-Uniform Attention and Augmentation for Transformer-Based Histopathological Image Classification

Mar 14, 2022
Thomas Stegmüller, Behzad Bozorgtabar, Antoine Spahr, Jean-Philippe Thiran

Progress in digital pathology is hindered by high-resolution images and the prohibitive cost of exhaustive localized annotations. The commonly used paradigm to categorize pathology images is patch-based processing, which often incorporates multiple instance learning (MIL) to aggregate local patch-level representations yielding image-level prediction. Nonetheless, diagnostically relevant regions may only take a small fraction of the whole tissue, and current MIL-based approaches often process images uniformly, discarding inter-patch interactions. To alleviate these issues, we propose ScoreNet, a new efficient transformer that exploits a differentiable recommendation stage to extract discriminative image regions and dedicate computational resources accordingly. The proposed transformer leverages the local and global attention of a few dynamically recommended high-resolution regions at an efficient computational cost. We further introduce a novel Mixup-based data augmentation, ScoreMix, which leverages the image's semantic distribution to guide the data mixing and produce coherent sample-label pairs. ScoreMix is embarrassingly simple and mitigates the pitfalls of previous augmentations, which assume a uniform semantic distribution and risk mislabeling the samples. Thorough experiments and ablation studies on three Haematoxylin & Eosin (H&E) breast cancer histology datasets have validated the superiority of our approach over prior art, including transformer-based models, on tumour region-of-interest (TRoI) classification. ScoreNet equipped with the proposed ScoreMix augmentation demonstrates better generalization capabilities and achieves new state-of-the-art (SOTA) results with only 50% of the data compared to other Mixup augmentation variants. Finally, ScoreNet yields high efficacy and outperforms SOTA efficient transformers, namely TransPath and SwinTransformer.

* 30 pages, 7 figures 
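
As a toy illustration of score-guided mixing (as opposed to uniform Mixup/CutMix), the sketch below pastes the highest-scoring patch of one image into another and mixes the one-hot labels by the pasted semantic mass. The patch size, score source, and label rule are assumptions, not the paper's exact ScoreMix formulation.

    # Toy score-guided cut-and-mix augmentation sketch.
    import numpy as np

    def score_mix(img_a, label_a, img_b, label_b, scores_b, patch=64):
        # img_*: (H, W, C) arrays; label_*: one-hot label vectors.
        # scores_b: (H//patch, W//patch) semantic scores for img_b's patches,
        # e.g. attention from a recommendation/scoring stage.
        h, w = np.unravel_index(np.argmax(scores_b), scores_b.shape)
        y, x = h * patch, w * patch
        mixed = img_a.copy()
        mixed[y:y + patch, x:x + patch] = img_b[y:y + patch, x:x + patch]
        # Weight the label by the share of semantic mass actually pasted,
        # rather than by raw pixel area as in uniform mixing.
        lam = float(scores_b[h, w] / (scores_b.sum() + 1e-9))
        mixed_label = (1 - lam) * label_a + lam * label_b
        return mixed, mixed_label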


Extreme Regression for Dynamic Search Advertising

Jan 20, 2020
Yashoteja Prabhu, Aditya Kusupati, Nilesh Gupta, Manik Varma

This paper introduces a new learning paradigm called eXtreme Regression (XR) whose objective is to accurately predict the numerical degrees of relevance of an extremely large number of labels to a data point. XR can provide elegant solutions to many large-scale ranking and recommendation applications including Dynamic Search Advertising (DSA). XR can learn more accurate models than the recently popular extreme classifiers which incorrectly assume strictly binary-valued label relevances. Traditional regression metrics which sum the errors over all the labels are unsuitable for XR problems since they could give extremely loose bounds for the label ranking quality. Also, the existing regression algorithms do not scale efficiently to millions of labels. This paper addresses these limitations through: (1) new evaluation metrics for XR which sum only the k largest regression errors; (2) a new algorithm called XReg which decomposes the XR task into a hierarchy of much smaller regression problems, thus leading to highly efficient training and prediction; and (3) a new labelwise prediction algorithm in XReg useful for DSA and other recommendation tasks. Experiments on benchmark datasets demonstrate that XReg can outperform the state-of-the-art extreme classifiers as well as large-scale regressors and rankers by up to 50% reduction in the new XR error metric, and up to 2% and 2.4% improvements in terms of the propensity-scored precision metric used in extreme classification and the click-through rate metric used in DSA, respectively. Deployment of XReg on DSA in Bing resulted in a relative gain of 27% in query coverage. XReg's source code can be downloaded from http://manikvarma.org/code/XReg/download.html.

* 15 pages, 4 figures, published at WSDM 2020 as a Long Oral 
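
The first contribution above, an evaluation metric that sums only the k largest regression errors, might be sketched as follows; the exact definition in the paper may differ.

    # Sketch of a top-k regression error for one data point, so that errors on
    # the vast majority of irrelevant labels do not dominate the metric.
    import numpy as np

    def top_k_regression_error(y_true, y_pred, k=5):
        # y_true, y_pred: relevance degrees over all labels for one data point.
        errors = np.abs(np.asarray(y_true) - np.asarray(y_pred))
        return float(np.sort(errors)[-k:].sum())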


On Tie Strength Augmented Social Correlation for Inferring Preference of Mobile Telco Users

Dec 09, 2016
Shifeng Liu, Zheng Hu, Sujit Dey, Xin Ke

For mobile telecom operators, it is critical to build preference profiles of their customers and connected users, which can help operators make better marketing strategies and provide more personalized services. With the deployment of deep packet inspection (DPI) in telecom networks, it is possible for the telco operators to obtain user online preference. However, DPI has its limitations, and user preference derived only from DPI faces sparsity and cold start problems. To better infer user preference, social correlation in the telco user network derived from Call Detail Records (CDRs) with regard to online preference is investigated. Though widely verified in several online social networks, social correlation between the online preferences of users in mobile telco networks, where the CDR-derived relationships have weaker social properties and users' mobile internet surfing activities are not visible to their neighbourhood, has not been explored at a large scale. Based on a real-world telecom dataset including CDRs and the preferences of more than 550K users over several months, we verified that correlation does exist between online preferences in such an ambiguous social network. Furthermore, we found that the stronger the ties that users build, the more similar their preferences may be. After defining the preference inference task as a Top-K recommendation problem, we incorporated a Matrix Factorization Collaborative Filtering model with social correlation and tie strength based on call patterns to generate the Top-K preferred categories for users. The proposed Tie Strength Augmented Social Recommendation (TSASoRec) model takes data sparsity and cold start user problems into account, considering both recorded and missing category entries. The experiment on a real dataset shows the proposed model can better infer user preference, especially for cold start users.

* This paper has been modified and the writing may be confusing to readers 
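
A generic sketch of matrix factorization with a tie-strength-weighted social regularizer, in the spirit of the model described above; the loss form, update rule, and weights are illustrative assumptions rather than the paper's equations.

    # Sketch: one SGD step of social-regularized matrix factorization where the
    # regularizer pulls a user's factors towards tie-strength-weighted neighbours.
    import numpy as np

    def sgd_step(U, V, r, u, i, friends, tie, lr=0.01, lam=0.05, beta=0.1):
        # U: (n_users, d) user factors; V: (n_categories, d) category factors.
        # r: observed preference of user u for category i.
        # friends: neighbour ids of u from the CDR graph; tie: their tie strengths.
        err = r - U[u] @ V[i]
        social = sum(t * (U[u] - U[f]) for f, t in zip(friends, tie))
        U[u] += lr * (err * V[i] - lam * U[u] - beta * social)
        V[i] += lr * (err * U[u] - lam * V[i])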


Deep learning-based electroencephalography analysis: a systematic review

Jan 20, 2019
Yannick Roy, Hubert Banville, Isabela Albuquerque, Alexandre Gramfort, Tiago H. Falk, Jocelyn Faubert

Electroencephalography (EEG) is a complex signal and can require several years of training to be correctly interpreted. Recently, deep learning (DL) has shown great promise in helping make sense of EEG signals due to its capacity to learn good feature representations from raw data. Whether DL truly presents advantages as compared to more traditional EEG processing approaches, however, remains an open question. In this work, we review 156 papers that apply DL to EEG, published between January 2010 and July 2018, and spanning different application domains such as epilepsy, sleep, brain-computer interfacing, and cognitive and affective monitoring. We extract trends and highlight interesting approaches in order to inform future research and formulate recommendations. Various data items were extracted for each study pertaining to 1) the data, 2) the preprocessing methodology, 3) the DL design choices, 4) the results, and 5) the reproducibility of the experiments. Our analysis reveals that the amount of EEG data used across studies varies from less than ten minutes to thousands of hours. As for the model, 40% of the studies used convolutional neural networks (CNNs), while 14% used recurrent neural networks (RNNs), most often with a total of 3 to 10 layers. Moreover, almost one-half of the studies trained their models on raw or preprocessed EEG time series. Finally, the median gain in accuracy of DL approaches over traditional baselines was 5.4% across all relevant studies. More importantly, however, we noticed studies often suffer from poor reproducibility: a majority of papers would be hard or impossible to reproduce given the unavailability of their data and code. To help the field progress, we provide a list of recommendations for future studies and we make our summary table of DL and EEG papers available and invite the community to contribute.



COVI-AgentSim: an Agent-based Model for Evaluating Methods of Digital Contact Tracing

Oct 30, 2020
Prateek Gupta, Tegan Maharaj, Martin Weiss, Nasim Rahaman, Hannah Alsdurf, Abhinav Sharma, Nanor Minoyan, Soren Harnois-Leblanc, Victor Schmidt, Pierre-Luc St. Charles, Tristan Deleu, Andrew Williams, Akshay Patel, Meng Qu, Olexa Bilaniuk, Gaétan Marceau Caron, Pierre Luc Carrier, Satya Ortiz-Gagné, Marc-Andre Rousseau, David Buckeridge, Joumana Ghosn, Yang Zhang, Bernhard Schölkopf, Jian Tang, Irina Rish, Christopher Pal, Joanna Merckx, Eilif B. Muller, Yoshua Bengio

The rapid global spread of COVID-19 has led to an unprecedented demand for effective methods to mitigate the spread of the disease, and various digital contact tracing (DCT) methods have emerged as a component of the solution. In order to make informed public health choices, there is a need for tools which allow evaluation and comparison of DCT methods. We introduce an agent-based compartmental simulator we call COVI-AgentSim, integrating detailed consideration of virology, disease progression, social contact networks, and mobility patterns, based on parameters derived from empirical research. We verify by comparing to real data that COVI-AgentSim is able to reproduce realistic COVID-19 spread dynamics, and perform a sensitivity analysis to verify that the relative performance of contact tracing methods is consistent across a range of settings. We use COVI-AgentSim to perform cost-benefit analyses comparing no DCT to: 1) standard binary contact tracing (BCT) that assigns binary recommendations based on binary test results; and 2) a rule-based method for feature-based contact tracing (FCT) that assigns a graded level of recommendation based on diverse individual features. We find that all DCT methods consistently reduce the spread of the disease, and that the advantage of FCT over BCT is maintained over a wide range of adoption rates. Feature-based methods of contact tracing avert more disability-adjusted life years (DALYs) per socioeconomic cost (measured by productive hours lost). Our results suggest any DCT method can help save lives, support re-opening of economies, and prevent second-wave outbreaks, and that FCT methods are a promising direction for enriching BCT using self-reported symptoms, yielding earlier warning signals and a significantly reduced spread of the virus per socioeconomic cost.



Beyond 512 Tokens: Siamese Multi-depth Transformer-based Hierarchical Encoder for Document Matching

Apr 26, 2020
Liu Yang, Mingyang Zhang, Cheng Li, Michael Bendersky, Marc Najork

Many information retrieval and natural language processing problems can be formalized as a semantic matching task. However, the existing work in this area has largely focused on matching between short texts, such as finding answer spans, sentences, and passages given a query or a natural language question. Semantic matching between long-form texts like documents, which can be applied to applications such as document clustering, news recommendation, and related article recommendation, is relatively less explored and needs more research effort. In recent years, self-attention based models like Transformers and BERT have achieved state-of-the-art performance in several natural language understanding tasks. These kinds of models, however, are still restricted to short text sequences like sentences due to the quadratic computational time and space complexity of self-attention with respect to the input sequence length. In this paper, we address these issues by proposing the Siamese Multi-depth Transformer-based Hierarchical (SMITH) Encoder for document representation learning and matching, which contains several novel design choices to adapt self-attention models for long text inputs. For model pre-training, we propose the masked sentence block language modeling task in addition to the original masked word language modeling task used in BERT, to capture sentence block relations within a document. The experimental results on several benchmark data sets for long-form document matching show that our proposed SMITH model outperforms the previous state-of-the-art Siamese matching models including hierarchical attention, multi-depth attention-based hierarchical recurrent neural network, and BERT for long-form document matching, and increases the maximum input text length from 512 to 2048 when compared with BERT-based baseline methods.

* arXiv preprint version. 10 pages 
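
A minimal sketch of the two-level idea (encode fixed-size sentence blocks independently, then attend over block embeddings at the document level). Dimensions, pooling choices, and the omission of the masked sentence-block pre-training task are simplifications, not the released SMITH model.

    # Sketch of a hierarchical document encoder: block-level self-attention on
    # short sentence blocks, then document-level self-attention over block vectors.
    import torch
    import torch.nn as nn

    class HierarchicalEncoder(nn.Module):
        def __init__(self, vocab_size=30522, dim=256, n_heads=4):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, dim)
            block_layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
            doc_layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
            self.block_encoder = nn.TransformerEncoder(block_layer, num_layers=2)
            self.doc_encoder = nn.TransformerEncoder(doc_layer, num_layers=2)

        def forward(self, token_ids):
            # token_ids: (n_blocks, block_len) — a long document pre-split into
            # fixed-size sentence blocks, so self-attention stays short-range.
            block_states = self.block_encoder(self.emb(token_ids))
            block_vecs = block_states.mean(dim=1).unsqueeze(0)  # (1, n_blocks, dim)
            doc_states = self.doc_encoder(block_vecs)
            return doc_states.mean(dim=1).squeeze(0)            # (dim,) document vector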

