
"Recommendation": models, code, and papers

Overview of LiLAS 2021 -- Living Labs for Academic Search

Mar 10, 2022
Philipp Schaer, Timo Breuer, Leyla Jael Castro, Benjamin Wolff, Johann Schaible, Narges Tavakolpoursaleh

The Living Labs for Academic Search (LiLAS) lab aims to strengthen the concept of user-centric living labs for academic search. It bridges the methodological gap between real-world and lab-based evaluation by allowing lab participants to evaluate their retrieval approaches in two real-world academic search systems from the life sciences and the social sciences. This overview paper outlines the two academic search systems, LIVIVO and GESIS Search, and their corresponding tasks within LiLAS: ad-hoc retrieval and dataset recommendation. The lab is built on a new evaluation infrastructure named STELLA that allows participants to submit results for their experimental systems either as pre-computed runs or as Docker containers that are integrated into the production systems and generate experimental results in real time. Both submission types are interleaved with the results of the production systems, allowing for seamless presentation and evaluation. An evaluation of the results and a meta-analysis of the different tasks and submission types complete this overview.

* CLEF 2021 


Branching Reinforcement Learning

Feb 16, 2022
Yihan Du, Wei Chen

In this paper, we propose a novel Branching Reinforcement Learning (Branching RL) model, and investigate both Regret Minimization (RM) and Reward-Free Exploration (RFE) metrics for this model. Unlike standard RL where the trajectory of each episode is a single $H$-step path, branching RL allows an agent to take multiple base actions in a state such that transitions branch out to multiple successor states correspondingly, and thus it generates a tree-structured trajectory. This model finds important applications in hierarchical recommendation systems and online advertising. For branching RL, we establish new Bellman equations and key lemmas, i.e., branching value difference lemma and branching law of total variance, and also bound the total variance by only $O(H^2)$ under an exponentially-large trajectory. For RM and RFE metrics, we propose computationally efficient algorithms BranchVI and BranchRFE, respectively, and derive nearly matching upper and lower bounds. Our results are only polynomial in problem parameters despite exponentially-large trajectories.
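
To make the tree-structured trajectory concrete, here is a minimal sketch (hypothetical names and toy dynamics, not the paper's algorithms) of rolling out a single episode of a branching MDP in Python: at every step the agent takes several base actions, each spawning its own successor state, so an episode of horizon $H$ unfolds as a tree rather than a single $H$-step path.

```python
import random

# Minimal sketch (not BranchVI/BranchRFE): rolling out one episode of a toy
# branching MDP. At each state the agent takes a *set* of base actions, and
# every chosen action leads to its own successor state, so the trajectory
# grows as a tree of depth H rather than a single H-step path.

def rollout(state, step, H, policy, transition, reward):
    """Return the total reward collected in the subtree rooted at `state`."""
    if step == H:
        return 0.0
    total = 0.0
    for action in policy(state, step):          # multiple base actions per state
        total += reward(state, action)
        next_state = transition(state, action)  # each action branches out
        total += rollout(next_state, step + 1, H, policy, transition, reward)
    return total

# Toy instantiation: 3 states, 2 base actions chosen per step, horizon 4.
random.seed(0)
policy = lambda s, h: random.sample([0, 1, 2], k=2)
transition = lambda s, a: (s + a) % 3
reward = lambda s, a: 1.0 if (s, a) == (0, 1) else 0.1
print("tree-structured episode return:", rollout(0, 0, 4, policy, transition, reward))
```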



Learning to Represent Human Motives for Goal-directed Web Browsing

Aug 07, 2021
Jyun-Yu Jiang, Chia-Jung Lee, Longqi Yang, Bahareh Sarrafzadeh, Brent Hecht, Jaime Teevan

Motives or goals are recognized in psychology literature as the most fundamental drive that explains and predicts why people do what they do, including when they browse the web. Although providing enormous value, these higher-ordered goals are often unobserved, and little is known about how to leverage such goals to assist people's browsing activities. This paper proposes to take a new approach to address this problem, which is fulfilled through a novel neural framework, Goal-directed Web Browsing (GoWeB). We adopt a psychologically-sound taxonomy of higher-ordered goals and learn to build their representations in a structure-preserving manner. Then we incorporate the resulting representations for enhancing the experiences of common activities people perform on the web. Experiments on large-scale data from Microsoft Edge web browser show that GoWeB significantly outperforms competitive baselines for in-session web page recommendation, re-visitation classification, and goal-based web page grouping. A follow-up analysis further characterizes how the variety of human motives can affect the difference observed in human behavioral patterns.

* Accepted by RecSys 2021 


Expert System Gradient Descent Style Training: Development of a Defensible Artificial Intelligence Technique

Mar 07, 2021
Jeremy Straub

Artificial intelligence systems, which are designed with a capability to learn from the data presented to them, are used throughout society. These systems are used to screen loan applicants, make sentencing recommendations for criminal defendants, scan social media posts for disallowed content, and more. Because these systems don't assign meaning to their complex learned correlation networks, they can learn associations that don't equate to causality, resulting in non-optimal and indefensible decisions. In addition to making decisions that are sub-optimal, these systems may create legal liability for their designers and operators by learning correlations that violate anti-discrimination and other laws regarding which factors can be used in different types of decision making. This paper presents the use of a machine learning expert system, which is developed with meaning-assigned nodes (facts) and correlations (rules). Multiple potential implementations are considered and evaluated under different conditions, including different network error and augmentation levels and different training levels. The performance of these systems is compared to random and fully connected networks.
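
As a rough illustration of what a gradient-descent-style trained expert system might look like, the sketch below (entirely hypothetical facts, rules, and update scheme, not the paper's implementation) keeps every learned weight attached to a named rule between named facts, so each decision remains traceable to human-readable knowledge.

```python
# Illustrative sketch only: an expert-system-like network in which each node
# is a named fact and each edge is a named rule with a weight. Rule weights
# are nudged in a gradient-descent style so that weighted fact activations
# reproduce a target decision, while every weight stays inspectable.

facts = ["income_verified", "prior_default", "stable_address"]
rules = {("income_verified", "approve"): 0.5,
         ("prior_default", "approve"): -0.5,
         ("stable_address", "approve"): 0.2}

def decide(activation):
    """Weighted sum of fact activations feeding the 'approve' conclusion."""
    return sum(rules[(f, "approve")] * activation[f] for f in facts)

def train(examples, lr=0.1, epochs=200):
    for _ in range(epochs):
        for activation, target in examples:
            error = decide(activation) - target
            for f in facts:  # one gradient-style step per rule weight
                rules[(f, "approve")] -= lr * error * activation[f]

examples = [({"income_verified": 1, "prior_default": 0, "stable_address": 1}, 1.0),
            ({"income_verified": 0, "prior_default": 1, "stable_address": 0}, 0.0)]
train(examples)
print({rule: round(w, 2) for rule, w in rules.items()})  # weights stay tied to named rules
```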



When is Early Classification of Time Series Meaningful?

Feb 23, 2021
Renjie Wu, Audrey Der, Eamonn J. Keogh

Since its introduction two decades ago, there has been increasing interest in the problem of early classification of time series. This problem generalizes classic time series classification to ask whether we can classify a time series subsequence with sufficient accuracy and confidence after seeing only some prefix of a target pattern. The idea is that earlier classification would allow us to take immediate action in a domain in which practical interventions are possible. For example, that intervention might be sounding an alarm or applying the brakes in an automobile. In this work, we make a surprising claim. Although there are dozens of papers on early classification of time series, it is not clear that any of them could ever work in a real-world setting. The problem is not with the algorithms per se but with the vague and underspecified problem description. Essentially all algorithms make implicit and unwarranted assumptions about the problem that ensure they will be plagued by false positives and false negatives, even when their reported results suggest near-perfect performance. We explain our findings with novel insights and experiments and offer recommendations to the community.



Less is More: Pre-training a Strong Siamese Encoder Using a Weak Decoder

Feb 18, 2021
Shuqi Lu, Chenyan Xiong, Di He, Guolin Ke, Waleed Malik, Zhicheng Dou, Paul Bennett, Tieyan Liu, Arnold Overwijk

Many real-world applications use Siamese networks to efficiently match text sequences at scale, which requires high-quality sequence encodings. This paper pre-trains language models dedicated to sequence matching in Siamese architectures. We first hypothesize that a representation is better for sequence matching if the entire sequence can be reconstructed from it, which, however, is unlikely to be achieved in standard autoencoders: a strong decoder can rely on its capacity and natural language patterns to reconstruct the sequence and bypass the need for better sequence encodings. We therefore propose a new self-learning method that pre-trains the encoder with a weak decoder, which reconstructs the original sequence from the encoder's [CLS] representations but is restricted in both capacity and attention span. In our experiments on web search and recommendation, the pre-trained SEED-Encoder, a "SiamEsE oriented encoder by reconstructing from weak decoder", shows significantly better generalization ability when fine-tuned in Siamese networks, improving overall accuracy and few-shot performance. Our code and models will be released.
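
The weak-decoder idea can be sketched in a few lines of PyTorch. The snippet below is a simplified illustration under assumed dimensions, with a one-layer GRU standing in for the capacity- and span-restricted decoder; it is not the released SEED-Encoder code. The point is that the decoder sees nothing but the encoder's [CLS]-style summary vector, so good reconstruction forces that single vector to capture the whole sequence.

```python
import torch
import torch.nn as nn

VOCAB, DIM = 1000, 64

embed = nn.Embedding(VOCAB, DIM)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True), num_layers=2)
weak_decoder = nn.GRU(DIM, DIM, num_layers=1, batch_first=True)  # deliberately small
to_vocab = nn.Linear(DIM, VOCAB)

def reconstruction_loss(tokens):                  # tokens: (batch, seq_len)
    hidden = encoder(embed(tokens))               # (batch, seq_len, DIM)
    summary = hidden[:, 0]                        # [CLS]-style sequence summary
    # The decoder is conditioned only on the summary: its initial hidden state
    # is the summary vector and its inputs are the shifted target embeddings.
    dec_out, _ = weak_decoder(embed(tokens[:, :-1]), summary.unsqueeze(0))
    logits = to_vocab(dec_out)
    return nn.functional.cross_entropy(
        logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))

tokens = torch.randint(0, VOCAB, (8, 32))
loss = reconstruction_loss(tokens)
loss.backward()                                   # gradients reach the encoder
print(float(loss))
```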



A Neural Question Answering System for Basic Questions about Subroutines

Jan 11, 2021
Aakash Bansal, Zachary Eberhart, Lingfei Wu, Collin McMillan

A question answering (QA) system is a type of conversational AI that generates natural language answers to questions posed by human users. QA systems often form the backbone of interactive dialogue systems, and have been studied extensively for a wide variety of tasks ranging from restaurant recommendations to medical diagnostics. Dramatic progress has been made in recent years, especially through the use of encoder-decoder neural architectures trained with big data input. In this paper, we take initial steps toward bringing state-of-the-art neural QA technologies to Software Engineering applications by designing a context-based QA system for basic questions about subroutines. We curate a training dataset of 10.9 million question/context/answer tuples based on rules we extract from recent empirical studies. Then, we train a custom neural QA model with this dataset and evaluate the model in a study with professional programmers. We demonstrate the strengths and weaknesses of the system, and lay the groundwork for its use in eventual dialogue systems for software engineering.

* 12 pages, 5 figures. To be published in the proceedings of the IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER) 2021, Honolulu, Hawaii, USA 


Free-rider Attacks on Model Aggregation in Federated Learning

Jun 21, 2020
Yann Fraboni, Richard Vidal, Marco Lorenzi

Free-rider attacks on federated learning consist in faking participation in the federated learning process with the goal of obtaining the final aggregated model without actually contributing any data. We introduce here the first theoretical and experimental analysis of free-rider attacks on federated learning schemes based on iterative parameter aggregation, such as FedAvg or FedProx, and provide formal guarantees for these attacks to converge to the aggregated models of the fair participants. We first show that a straightforward implementation of this attack can be achieved simply by not updating the local parameters during the iterative federated optimization. As this attack can be detected by adopting simple countermeasures at the server level, we subsequently study more complex disguising schemes based on stochastic updates of the free-rider parameters. We demonstrate the proposed strategies in a number of experimental scenarios, in both iid and non-iid settings. We conclude by providing recommendations to avoid free-rider attacks in real-world applications of federated learning, especially in sensitive domains where the security of data and models is critical.
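
A minimal sketch of the two behaviors described above, with hypothetical function names rather than the authors' code, inside one FedAvg-style aggregation round:

```python
import numpy as np

# Sketch: the plain free-rider returns the received global parameters
# untouched, while the disguised variant adds small stochastic noise so the
# fake "update" is harder to flag with simple server-side checks.

rng = np.random.default_rng(0)

def honest_update(global_params, local_data, lr=0.1, steps=10):
    params = global_params.copy()
    for _ in range(steps):                        # placeholder local SGD
        grad = local_data.T @ (local_data @ params) / len(local_data)
        params -= lr * grad
    return params

def plain_free_rider(global_params):
    return global_params.copy()                   # contributes nothing

def disguised_free_rider(global_params, sigma=1e-3):
    return global_params + rng.normal(0.0, sigma, size=global_params.shape)

def fedavg_round(client_updates):
    return np.mean(client_updates, axis=0)        # unweighted aggregation

dim, data = 5, rng.normal(size=(100, 5))
w = rng.normal(size=dim)
updates = [honest_update(w, data), plain_free_rider(w), disguised_free_rider(w)]
w = fedavg_round(updates)
print(w)
```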



Predicting A Creator's Preferences In, and From, Interactive Generative Art

Mar 03, 2020
Devi Parikh

As a lay user creates an art piece using an interactive generative art tool, what, if anything, do the choices they make tell us about them and their preferences? These preferences could be in the specific generative art form (e.g., color palettes, density of the piece, thickness or curvatures of any lines in the piece); predicting them could lead to a smarter interactive tool. Or they could be preferences in other walks of life (e.g., music, fashion, food, interior design, paintings) or attributes of the person (e.g., personality type, gender, artistic inclinations); predicting them could lead to improved personalized recommendations for products or experiences. To study this research question, we collect preferences from 311 subjects, both in a specific generative art form and in other walks of life. We analyze the preferences and train machine learning models to predict a subset of preferences from the remaining. We find that preferences in the generative art form we studied cannot predict preferences in other walks of life better than chance (and vice versa). However, preferences within the generative art form are reliably predictive of each other.



Identifying Mislabeled Instances in Classification Datasets

Dec 11, 2019
Nicolas Michael Müller, Karla Markert

A key requirement for supervised machine learning is labeled training data, which is created by annotating unlabeled data with the appropriate class. Because this process often cannot be done by machines, labeling needs to be performed by human domain experts. This process tends to be expensive both in time and money, and is prone to errors. Additionally, reviewing an entire labeled dataset manually is often prohibitively costly, so many real-world datasets contain mislabeled instances. To address this issue, we present in this paper a non-parametric end-to-end pipeline to find mislabeled instances in numerical, image and natural language datasets. We evaluate our system quantitatively by adding a small amount of label noise to 29 datasets, and show that we find mislabeled instances with an average precision of more than 0.84 when reviewing our system's top 1% recommendations. We then apply our system to publicly available datasets and find mislabeled instances in CIFAR-100, Fashion-MNIST, and others. Finally, we publish the code and an applicable implementation of our approach.
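
The evaluation protocol lends itself to a short scikit-learn sketch (an illustrative stand-in, not the authors' pipeline): flip a small fraction of labels, rank instances by how little out-of-fold probability a classifier assigns to their given label, and measure precision among the top 1% of recommendations.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)

# Inject 5% label noise so we know which instances are truly mislabeled.
noisy_y = y.copy()
flipped = rng.choice(len(y), size=int(0.05 * len(y)), replace=False)
noisy_y[flipped] = (noisy_y[flipped] + rng.integers(1, 10, size=len(flipped))) % 10

# Out-of-fold probability of the (possibly wrong) given label for each instance.
proba = cross_val_predict(LogisticRegression(max_iter=2000), X, noisy_y,
                          cv=5, method="predict_proba")
given_label_proba = proba[np.arange(len(y)), noisy_y]

# Rank the least-supported labels first and check the top 1% of suspects.
suspects = np.argsort(given_label_proba)[: max(1, len(y) // 100)]
precision_at_1pct = np.isin(suspects, flipped).mean()
print(f"precision in top 1% recommendations: {precision_at_1pct:.2f}")
```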

* 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 2019 

