
"Recommendation": models, code, and papers

User-friendly Comparison of Similarity Algorithms on Wikidata

Aug 11, 2021
Filip Ilievski, Pedro Szekely, Gleb Satyukov, Amandeep Singh

While the similarity between two concept words has been evaluated and studied for decades, much less attention has been devoted to algorithms that can compute the similarity of nodes in very large knowledge graphs, like Wikidata. To facilitate investigations and head-to-head comparisons of similarity algorithms on Wikidata, we present a user-friendly interface that allows flexible computation of similarity between Qnodes in Wikidata. At present, the similarity interface supports four algorithms, based on: graph embeddings (TransE, ComplEx), text embeddings (BERT), and class-based similarity. We demonstrate the behavior of the algorithms on representative examples about semantically similar, related, and entirely unrelated entity pairs. To support anticipated applications that require efficient similarity computations, like entity linking and recommendation, we also provide a REST API that can compute most similar neighbors for any Qnode in Wikidata.
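Whatever the underlying algorithm (TransE, ComplEx, or BERT), similarity between two Qnodes typically reduces to a vector comparison between their embeddings. A rough sketch with cosine similarity, using made-up toy vectors rather than actual Wikidata embeddings:

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 4-dimensional "Qnode" embeddings -- illustrative values only,
# not real TransE/ComplEx/BERT vectors.
emb = {
    "Q144": np.array([0.9, 0.1, 0.0, 0.2]),    # dog
    "Q146": np.array([0.8, 0.2, 0.1, 0.3]),    # house cat
    "Q11424": np.array([0.0, 0.9, 0.8, 0.1]),  # film
}

print(cosine_similarity(emb["Q144"], emb["Q146"]))    # similar entities: high score
print(cosine_similarity(emb["Q144"], emb["Q11424"]))  # unrelated entities: low score
```

A nearest-neighbor service like the REST API described above would apply the same score against all Qnode embeddings and return the top-k.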



A Pragmatic Look at Deep Imitation Learning

Aug 04, 2021
Kai Arulkumaran, Dan Ogawa Lillrank

The introduction of the generative adversarial imitation learning (GAIL) algorithm has spurred the development of scalable imitation learning approaches using deep neural networks. The GAIL objective can be thought of as 1) matching the expert policy's state distribution; 2) penalising the learned policy's state distribution; and 3) maximising entropy. While theoretically motivated, in practice GAIL can be difficult to apply, not least due to the instabilities of adversarial training. In this paper, we take a pragmatic look at GAIL and related imitation learning algorithms. We implement and automatically tune a range of algorithms in a unified experimental setup, presenting a fair evaluation between the competing methods. From our results, our primary recommendation is to consider non-adversarial methods. Furthermore, we discuss the common components of imitation learning objectives, and present promising avenues for future research.
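As a generic illustration of the adversarial objective (not the paper's implementation; logits and values here are invented), a GAIL-style discriminator is trained with binary cross-entropy to separate expert from policy state-action pairs, and its output is turned into an imitation reward:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gail_discriminator_loss(expert_logits, policy_logits):
    """Binary cross-entropy: expert (state, action) pairs labeled 1, policy pairs 0."""
    loss_expert = -np.log(sigmoid(expert_logits))
    loss_policy = -np.log(1.0 - sigmoid(policy_logits))
    return float(np.mean(loss_expert) + np.mean(loss_policy))

def gail_reward(policy_logits):
    """Non-saturating imitation reward -log(1 - D(s, a)) fed to the RL learner."""
    return -np.log(1.0 - sigmoid(policy_logits))

# Toy discriminator outputs for a small batch.
expert_logits = np.array([2.0, 1.5, 1.0])
policy_logits = np.array([-1.0, -0.5, 0.0])
print(gail_discriminator_loss(expert_logits, policy_logits))
print(gail_reward(policy_logits))
```

The min-max interplay between this loss and the reward is one source of the training instability the paper discusses.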



QoS Prediction for 5G Connected and Automated Driving

Jul 11, 2021
Apostolos Kousaridas, Ramya Panthangi Manjunath, Jose Mauricio Perdomo, Chan Zhou, Ernst Zielinski, Steffen Schmitz, Andreas Pfadler

The 5G communication system can support the demanding quality-of-service (QoS) requirements of many advanced vehicle-to-everything (V2X) use cases. However, safe and efficient driving, especially of automated vehicles, may be affected by sudden changes in the provided QoS. For that reason, 5G communication systems have recently been enabled to predict QoS changes and to notify vehicles of these predicted changes early, allowing the vehicles to avoid or mitigate the effects of sudden QoS changes at the application level. This article describes how QoS predictions could be generated by a 5G communication system and delivered to a V2X application. The tele-operated driving use case serves as an example to analyze the feasibility of a QoS prediction scheme. Useful recommendations for the development of a QoS prediction solution are provided, and open research topics are identified.

* 7 pages, 5 figures, accepted for publication in the IEEE Communications Magazine 


What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study

Jun 10, 2020
Marcin Andrychowicz, Anton Raichuk, Piotr Stańczyk, Manu Orsini, Sertan Girgin, Raphael Marinier, Léonard Hussenot, Matthieu Geist, Olivier Pietquin, Marcin Michalski, Sylvain Gelly, Olivier Bachem

In recent years, on-policy reinforcement learning (RL) has been successfully applied to many different continuous control tasks. While RL algorithms are often conceptually simple, their state-of-the-art implementations take numerous low- and high-level design decisions that strongly affect the performance of the resulting agents. Those choices are usually not extensively discussed in the literature, leading to discrepancies between published descriptions of algorithms and their implementations. This makes it hard to attribute progress in RL and slows down overall progress [Engstrom'20]. As a step towards filling that gap, we implement more than 50 such "choices" in a unified on-policy RL framework, allowing us to investigate their impact in a large-scale empirical study. We train over 250,000 agents in five continuous control environments of different complexity and provide insights and practical recommendations for on-policy training of RL agents.
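One representative implementation-level choice in on-policy RL is how advantages are estimated. A minimal sketch of generalized advantage estimation (GAE), written as a generic illustration rather than the paper's framework, and ignoring episode-termination handling (itself one of the choices such a study must pin down):

```python
import numpy as np

def gae_advantages(rewards, values, last_value, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation over one rollout, computed backwards."""
    adv = np.zeros(len(rewards))
    next_value, running = last_value, 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * next_value - values[t]  # one-step TD error
        running = delta + gamma * lam * running              # discounted sum of TD errors
        adv[t] = running
        next_value = values[t]
    return adv

rewards = np.array([0.0, 0.0, 1.0])
values = np.array([0.1, 0.2, 0.5])
print(gae_advantages(rewards, values, last_value=0.0))
```

Even here, gamma, lam, and whether the advantages are subsequently normalized are exactly the kind of low-level decisions the study quantifies.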



A Dataset Schema for Cooperative Learning from Demonstration in Multi-robots Systems

Dec 03, 2019
Marco A. C. Simões, Robson Marinho da Silva, Tatiane Nogueira

Multi-Agent Systems (MASs) have been used to solve complex problems that demand intelligent agents working together to reach the desired goals. These agents should effectively synchronize their individual behaviors so that they can act as a coordinated team to achieve the common goal of the whole system. One of the main issues in MASs is agent coordination: it is common for domain experts observing a MAS's execution to disapprove of the agents' decisions, and this gap between expert and MAS decisions persists even when the MAS was designed using the best methods and tools for agent coordination. Therefore, this paper proposes a new dataset schema to support learning coordinated behavior in MASs from demonstration. The proposed solution is validated in a Multi-Robot System (MRS) by organizing a collection of new cooperative plans recommended through demonstrations by domain experts.

* This is a pre-print of an article published in the Journal of Intelligent & Robotic Systems. The final authenticated version will be available online at: https://doi.org/10.1007/s10846-019-01123-w 


Constructing Ontology-Based Cancer Treatment Decision Support System with Case-Based Reasoning

Dec 05, 2018
Ying Shen, Joël Colloc, Armelle Jacquet-Andrieu, Ziyi Guo, Yong Liu

Decision support is a probabilistic and quantitative method designed for modeling problems in situations with ambiguity. Computer technology can be employed to provide clinical decision support and treatment recommendations. The problem of natural language applications is that they lack formality and the interpretation is not consistent. Conversely, ontologies can capture the intended meaning and specify modeling primitives. Disease Ontology (DO) that pertains to cancer's clinical stages and their corresponding information components is utilized to improve the reasoning ability of a decision support system (DSS). The proposed DSS uses Case-Based Reasoning (CBR) to consider disease manifestations and provides physicians with treatment solutions from similar previous cases for reference. The proposed DSS supports natural language processing (NLP) queries. The DSS obtained 84.63% accuracy in disease classification with the help of the ontology.
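The core of case-based reasoning is retrieving the most similar stored case and reusing its solution. A minimal sketch of that retrieval step; the cases, attributes, weights, and treatments below are hypothetical, not from the described system:

```python
def case_similarity(query, case, weights):
    """Weighted attribute overlap: a match on an attribute contributes its weight."""
    matched = sum(w for attr, w in weights.items() if query.get(attr) == case.get(attr))
    return matched / sum(weights.values())

def retrieve(query, case_base, weights):
    """Return the stored case most similar to the query (the 'retrieve' step of CBR)."""
    return max(case_base, key=lambda case: case_similarity(query, case, weights))

# Hypothetical case base -- attribute names, weights, and treatments are invented.
case_base = [
    {"stage": "II", "histology": "adenocarcinoma", "age_group": "50s",
     "treatment": "surgery + adjuvant chemotherapy"},
    {"stage": "IV", "histology": "squamous", "age_group": "60s",
     "treatment": "palliative care"},
]
weights = {"stage": 2.0, "histology": 1.0, "age_group": 0.5}
query = {"stage": "II", "histology": "adenocarcinoma", "age_group": "60s"}
print(retrieve(query, case_base, weights)["treatment"])
```

In an ontology-backed system like the one described, exact attribute matching would be replaced by similarity over DO classes, so that, e.g., two related histology subtypes score partial credit instead of zero.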

* International Conference on Smart Computing and Communication SmartCom 2017: Smart Computing and Communication pp 278-288 


An integrated recurrent neural network and regression model with spatial and climatic couplings for vector-borne disease dynamics

Jan 23, 2022
Zhijian Li, Jack Xin, Guofa Zhou

We developed an integrated recurrent neural network and nonlinear regression spatio-temporal model for vector-borne disease evolution. We take into account climate data and seasonality as external factors that correlate with disease-transmitting insects (e.g. flies), as well as spill-over infections from the regions neighboring a region of interest. The climate data is encoded into the model through a quadratic embedding scheme motivated by recommendation systems. The neighboring regions' influence is modeled by a long short-term memory neural network. The integrated model is trained by stochastic gradient descent and tested on leishmaniasis data from Sri Lanka for 2013-2018, a period in which infection outbreaks occurred. Our model outperformed ARIMA models across a number of regions with high infection counts, and an associated ablation study lends support to our modeling hypotheses and ideas.
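A quadratic embedding "motivated by recommendation systems" is reminiscent of the second-order interaction term of factorization machines. A minimal sketch under that assumption (not necessarily the authors' exact scheme; the feature vector and embedding matrix are invented):

```python
import numpy as np

def quadratic_embedding(x, V):
    """Factorization-machine-style second-order term sum_{i<j} <V[i], V[j]> x[i] x[j],
    computed via the O(n*k) identity 0.5 * sum_f ((V^T x)_f^2 - sum_i V[i,f]^2 x[i]^2)."""
    linear = V.T @ x
    return 0.5 * float(np.sum(linear ** 2) - np.sum((V ** 2).T @ (x ** 2)))

# Toy climate feature vector and 2-factor embedding matrix (illustrative only).
x = np.array([22.5, 180.0, 0.7])  # e.g. temperature, rainfall, humidity
V = np.random.default_rng(0).normal(size=(3, 2)) * 0.1
print(quadratic_embedding(x, V))
```

The learned factor vectors in V let the model capture pairwise interactions between climate features with far fewer parameters than a full quadratic form.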



Analyzing the Machine Learning Conference Review Process

Nov 26, 2020
David Tran, Alex Valtchanov, Keshav Ganapathy, Raymond Feng, Eric Slud, Micah Goldblum, Tom Goldstein

Mainstream machine learning conferences have seen a dramatic increase in the number of participants, along with a growing range of perspectives, in recent years. Members of the machine learning community are likely to overhear allegations ranging from randomness of acceptance decisions to institutional bias. In this work, we critically analyze the review process through a comprehensive study of papers submitted to ICLR between 2017 and 2020. We quantify reproducibility/randomness in review scores and acceptance decisions, and examine whether scores correlate with paper impact. Our findings suggest strong institutional bias in accept/reject decisions, even after controlling for paper quality. Furthermore, we find evidence for a gender gap, with female authors receiving lower scores, lower acceptance rates, and fewer citations per paper than their male counterparts. We conclude our work with recommendations for future conference organizers.

* NeurIPS Workshop on Navigating the Broader Impacts of AI Research. Full version at arXiv:2010.05137 


An Open Review of OpenReview: A Critical Analysis of the Machine Learning Conference Review Process

Oct 26, 2020
David Tran, Alex Valtchanov, Keshav Ganapathy, Raymond Feng, Eric Slud, Micah Goldblum, Tom Goldstein

Mainstream machine learning conferences have seen a dramatic increase in the number of participants, along with a growing range of perspectives, in recent years. Members of the machine learning community are likely to overhear allegations ranging from randomness of acceptance decisions to institutional bias. In this work, we critically analyze the review process through a comprehensive study of papers submitted to ICLR between 2017 and 2020. We quantify reproducibility/randomness in review scores and acceptance decisions, and examine whether scores correlate with paper impact. Our findings suggest strong institutional bias in accept/reject decisions, even after controlling for paper quality. Furthermore, we find evidence for a gender gap, with female authors receiving lower scores, lower acceptance rates, and fewer citations per paper than their male counterparts. We conclude our work with recommendations for future conference organizers.

* 19 pages, 6 Figures 

