"Recommendation": models, code, and papers

Improving Adherence to Heart Failure Management Guidelines via Abductive Reasoning

Jul 16, 2017
Zhuo Chen, Elmer Salazar, Kyle Marple, Gopal Gupta, Lakshman Tamil, Sandeep Das, Alpesh Amin

Management of chronic diseases such as heart failure (HF) is a major public health problem. A standard approach to managing chronic diseases taken by the medical community is to have a committee of experts develop guidelines that all physicians should follow. Due to their complexity, these guidelines are difficult to implement and are adopted slowly by the medical community at large. We have developed a physician advisory system that codes the entire set of clinical practice guidelines for managing HF using answer set programming (ASP). In this paper, we show how abductive reasoning can be deployed to find missing symptoms and conditions that the patient must exhibit in order for a treatment prescribed by a physician to work effectively. Thus, if a physician does not make an appropriate recommendation or makes a non-adherent recommendation, our system will advise the physician about the symptoms and conditions that must be in effect for that recommendation to apply. The paper is under consideration for acceptance in TPLP.

* Paper presented at the 33rd International Conference on Logic Programming (ICLP 2017), Melbourne, Australia, August 28 to September 1, 2017. 15 pages, LaTeX 
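
The guideline encoding in the paper is written in ASP and is not reproduced here. The toy Python sketch below (treatment names and findings are invented for illustration) only conveys the abduction step the abstract describes: given a recommended treatment and a patient's recorded findings, compute which guideline conditions are still missing for the recommendation to be adherent.

```python
# Toy illustration of the abduction idea (not the paper's ASP encoding):
# guideline rules list the conditions under which a treatment is recommended;
# abduction returns the conditions that are missing for a given recommendation.

GUIDELINES = {
    # hypothetical, simplified rules: treatment -> set of required findings
    "ace_inhibitor": {"hfref", "reduced_ejection_fraction", "no_angioedema_history"},
    "beta_blocker":  {"hfref", "stable", "no_severe_bradycardia"},
}

def abduce_missing_conditions(treatment, observed_findings):
    """Return the findings that must additionally hold for the recommended
    treatment to comply with the encoded guideline rule."""
    required = GUIDELINES.get(treatment)
    if required is None:
        raise ValueError(f"no guideline rule encoded for {treatment!r}")
    return required - set(observed_findings)

if __name__ == "__main__":
    patient = {"hfref", "stable"}
    print(abduce_missing_conditions("beta_blocker", patient))
    # -> {'no_severe_bradycardia'}: the condition the physician should verify
```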

Two-sided fairness in rankings via Lorenz dominance

Oct 28, 2021
Virginie Do, Sam Corbett-Davies, Jamal Atif, Nicolas Usunier

We consider the problem of generating rankings that are fair towards both users and item producers in recommender systems. We address both usual recommendation (e.g., of music or movies) and reciprocal recommendation (e.g., dating). Following concepts of distributive justice in welfare economics, our notion of fairness aims at increasing the utility of the worse-off individuals, which we formalize using the criterion of Lorenz efficiency. It guarantees that rankings are Pareto efficient, and that they maximally redistribute utility from better-off to worse-off, at a given level of overall utility. We propose to generate rankings by maximizing concave welfare functions, and develop an efficient inference procedure based on the Frank-Wolfe algorithm. We prove that unlike existing approaches based on fairness constraints, our approach always produces fair rankings. Our experiments also show that it increases the utility of the worse-off at lower costs in terms of overall utility.

* NeurIPS 2021 
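
A minimal sketch of how a Frank-Wolfe loop over expected exposures might look, assuming a DCG-style position-weight model and a concave (alpha-fair) welfare over user utilities and item exposures. It is a simplification of the approach the abstract describes, not the authors' code.

```python
# Frank-Wolfe maximization of a concave welfare function over rankings.
# Decision variable: E[i, j] = expected position weight item j receives from
# user i's ranking. The linear oracle over each user's ranking polytope is a sort.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_slots = 30, 50, 5
rel = rng.random((n_users, n_items))          # estimated relevance scores
w = 1.0 / np.log2(np.arange(2, n_slots + 2))  # position weights (DCG-style, descending)

def welfare(E, alpha=0.5, eps=1e-9):
    user_util = (E * rel).sum(axis=1)         # exposure-weighted relevance per user
    item_util = E.sum(axis=0)                 # total exposure per producer
    phi = lambda x: (x + eps) ** alpha        # concave -> favors the worse-off
    return phi(user_util).sum() + phi(item_util).sum()

def welfare_grad(E, alpha=0.5, eps=1e-9):
    user_util = (E * rel).sum(axis=1, keepdims=True)
    item_util = E.sum(axis=0, keepdims=True)
    dphi = lambda x: alpha * (x + eps) ** (alpha - 1.0)
    return dphi(user_util) * rel + dphi(item_util)

def linear_oracle(grad):
    """Best vertex per user: give the largest position weight to the item
    with the largest gradient, the next weight to the next item, and so on."""
    V = np.zeros_like(grad)
    top = np.argsort(-grad, axis=1)[:, :n_slots]
    for i in range(grad.shape[0]):
        V[i, top[i]] = w
    return V

E = linear_oracle(rel)                         # start from the greedy ranking
for t in range(200):                           # Frank-Wolfe iterations
    V = linear_oracle(welfare_grad(E))
    gamma = 2.0 / (t + 2.0)
    E = (1 - gamma) * E + gamma * V
print("welfare of greedy ranking:", welfare(linear_oracle(rel)))
print("welfare after Frank-Wolfe:", welfare(E))
```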

Beyond Greedy Ranking: Slate Optimization via List-CVAE

May 24, 2018
Ray Jiang, Sven Gowal, Timothy A. Mann, Danilo J. Rezende

The conventional approach to solving the recommendation problem is through greedy ranking by prediction scores for individual document candidates. However, these methods fail to optimize the slate as a whole, and often struggle to capture biases caused by the page layout and interdependencies between documents. The slate recommendation problem aims to find the optimal ordered subset of documents, a.k.a. the slate, given the page layout, in order to serve recommendations to users. Solving this problem is hard due to the combinatorial explosion of document candidates and their display positions on the page. In this paper, we introduce List Conditional Variational Auto-Encoders (List-CVAE) to learn the joint distribution of documents on the slate conditioned on user responses, and to directly generate slates. Experiments on simulated and real-world data show that List-CVAE consistently outperforms greedy ranking methods on document corpora of various scales.

* Preliminary work. Under review by the Neural Information Processing Systems (NIPS) 2018 
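
A rough PyTorch sketch of the kind of conditional VAE the abstract describes: a slate of document embeddings is encoded together with the user-response vector, and the decoder generates a slate directly. The dimensions, layer sizes, and the use of an "ideal response" at serving time are assumptions of this sketch, not the authors' exact model.

```python
import torch
import torch.nn as nn

class ListCVAE(nn.Module):
    def __init__(self, slate_size, doc_dim, cond_dim, latent_dim=16, hidden=128):
        super().__init__()
        in_dim = slate_size * doc_dim + cond_dim
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, slate_size * doc_dim),
        )
        self.slate_size, self.doc_dim = slate_size, doc_dim

    def forward(self, slate, cond):
        h = self.encoder(torch.cat([slate.flatten(1), cond], dim=1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        out = self.decoder(torch.cat([z, cond], dim=1))
        return out.view(-1, self.slate_size, self.doc_dim), mu, logvar

def loss_fn(recon, slate, mu, logvar):
    recon_loss = ((recon - slate) ** 2).sum(dim=(1, 2)).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    return recon_loss + kl

model = ListCVAE(slate_size=5, doc_dim=8, cond_dim=5)
slate = torch.randn(4, 5, 8); resp = torch.randint(0, 2, (4, 5)).float()
recon, mu, logvar = model(slate, resp)
loss = loss_fn(recon, slate, mu, logvar)       # ELBO-style training loss

# At serving time, condition on an ideal response (e.g., all clicks), decode a
# slate from a sampled latent code, then match embeddings to candidate documents.
z = torch.randn(1, 16)
ideal = torch.ones(1, 5)
slate_embeddings = model.decoder(torch.cat([z, ideal], dim=1)).view(1, 5, 8)
```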

A sequential Monte Carlo approach to Thompson sampling for Bayesian optimization

May 16, 2017
Hildo Bijl, Thomas B. Schön, Jan-Willem van Wingerden, Michel Verhaegen

Bayesian optimization through Gaussian process regression is an effective method of optimizing an unknown function for which every measurement is expensive. It approximates the objective function and then recommends a new measurement point to try out. This recommendation is usually selected by optimizing a given acquisition function. After a sufficient number of measurements, a recommendation about the maximum is made. However, a key realization is that the maximum of a Gaussian process is not a deterministic point but a random variable with a distribution of its own. This distribution cannot be calculated analytically. Our main contribution is an algorithm, inspired by sequential Monte Carlo samplers, that approximates this maximum distribution. Subsequently, by taking samples from this distribution, we enable Thompson sampling to be applied to (multi-armed) bandit optimization problems with a continuous input space. All this is done without requiring the optimization of a nonlinear acquisition function. Experiments show that the resulting optimization method performs competitively at keeping the cumulative regret limited.
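
A simplified numpy sketch of the core idea, not the authors' sequential Monte Carlo sampler: draw joint samples from the Gaussian-process posterior on a set of particle locations, record where each sample attains its maximum to approximate the distribution of the maximum, and Thompson-sample the next measurement point from that empirical distribution. The kernel, toy objective, and particle counts are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(a, b, ell=0.3, sf=1.0):
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-2):
    K = rbf_kernel(x_train, x_train) + noise**2 * np.eye(len(x_train))
    Ks = rbf_kernel(x_query, x_train)
    Kss = rbf_kernel(x_query, x_query)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov

f = lambda x: np.sin(3 * x) + 0.5 * x                 # unknown objective (toy)
x_train = rng.uniform(0, 2, size=5)
y_train = f(x_train) + 0.01 * rng.standard_normal(5)

particles = rng.uniform(0, 2, size=200)               # candidate input locations
mean, cov = gp_posterior(x_train, y_train, particles)
draws = rng.multivariate_normal(mean, cov + 1e-6 * np.eye(len(particles)), size=100)

argmax_particles = particles[draws.argmax(axis=1)]    # empirical maximum distribution
next_x = rng.choice(argmax_particles)                 # Thompson sample = next measurement
print("next measurement point:", next_x)
```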


On the Relationship Between Explanations, Fairness Perceptions, and Decisions

Apr 29, 2022
Jakob Schoeffer, Maria De-Arteaga, Niklas Kuehl

It is known that recommendations of AI-based systems can be incorrect or unfair. Hence, it is often proposed that a human be the final decision-maker. Prior work has argued that explanations are an essential pathway to help human decision-makers enhance decision quality and mitigate bias, i.e., facilitate human-AI complementarity. For these benefits to materialize, explanations should enable humans to appropriately rely on AI recommendations and override the algorithmic recommendation when necessary to increase distributive fairness of decisions. The literature, however, does not provide conclusive empirical evidence as to whether explanations enable such complementarity in practice. In this work, we (a) provide a conceptual framework to articulate the relationships between explanations, fairness perceptions, reliance, and distributive fairness, (b) apply it to understand (seemingly) contradictory research findings at the intersection of explanations and fairness, and (c) derive cohesive implications for the formulation of research questions and the design of experiments.

* ACM CHI 2022 Workshop on Human-Centered Explainable AI (HCXAI), May 12–13, 2022, New Orleans, LA, USA 

A Neural Attention Model for Adaptive Learning of Social Friends' Preferences

Jun 29, 2019
Dimitrios Rafailidis, Gerhard Weiss

Social-based recommendation systems exploit the selections of friends to combat the data sparsity of user preferences and improve the recommendation accuracy of the collaborative filtering strategy. The main challenge is to capture and weigh friends' preferences, as in practice they do not necessarily match. In this paper, we propose a Neural Attention mechanism for Social collaborative filtering, namely NAS. We design a neural architecture that carefully computes the non-linearity in friends' preferences by taking into account the social latent effects of friends on user behavior. In addition, we introduce a social behavioral attention mechanism to adaptively weigh the influence of friends on user preferences and consequently generate accurate recommendations. Our experiments on publicly available datasets demonstrate the effectiveness of the proposed NAS model over other state-of-the-art methods. Furthermore, we study the effect of the proposed social behavioral attention mechanism and show that it is a key factor in our model's performance.
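
A hedged PyTorch sketch of the general mechanism; the layer sizes, scoring function, and names are guesses for illustration, not the NAS architecture itself. Attention weights over a user's friends decide how much each friend's embedding contributes to the social part of the user representation used to score items.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SocialAttentionScorer(nn.Module):
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.att = nn.Linear(2 * dim, 1)              # scores each (user, friend) pair

    def forward(self, user, friends, item):
        u = self.user_emb(user)                       # (B, d)
        f = self.user_emb(friends)                    # (B, n_friends, d)
        i = self.item_emb(item)                       # (B, d)
        pair = torch.cat([u.unsqueeze(1).expand_as(f), f], dim=-1)
        alpha = F.softmax(self.att(pair).squeeze(-1), dim=1)   # friend weights
        social = (alpha.unsqueeze(-1) * f).sum(dim=1)          # weighted friend mix
        return ((u + social) * i).sum(dim=-1)                  # preference score

model = SocialAttentionScorer(n_users=1000, n_items=500)
score = model(torch.tensor([3]), torch.tensor([[10, 42, 7]]), torch.tensor([99]))
print(score)
```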


Automatic Machine Learning Derived from Scholarly Big Data

Mar 06, 2020
Asnat Greenstein-Messica, Roman Vainshtein, Gilad Katz, Bracha Shapira, Lior Rokach

One of the challenging aspects of applying machine learning is the need to identify the algorithms that will perform best for a given dataset. This process can be difficult and time-consuming, and it often requires a great deal of domain knowledge. We present Sommelier, an expert system for recommending the machine learning algorithms that should be applied to a previously unseen dataset. Sommelier is based on word embedding representations of the domain knowledge extracted from a large corpus of academic publications. When presented with a new dataset and its problem description, Sommelier leverages a recommendation model trained on the word embedding representation to provide a ranked list of the most relevant algorithms for the dataset. We demonstrate Sommelier's effectiveness by conducting an extensive evaluation on 121 publicly available datasets and 53 classification algorithms. The top algorithms recommended for each dataset by Sommelier achieve, on average, 97.7% of the optimal accuracy of all surveyed algorithms.
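
An illustrative sketch only, using a tiny bag-of-words stand-in for the paper's word embeddings and hypothetical algorithm profiles: the new dataset's problem description is embedded, and candidate algorithms are ranked by cosine similarity to text associated with each algorithm.

```python
import numpy as np

VOCAB = ["image", "text", "tabular", "sparse", "sequence", "small", "large"]

def bag_of_words_embedding(text):
    # stand-in for embeddings trained on a scholarly corpus
    tokens = text.lower().split()
    return np.array([tokens.count(w) for w in VOCAB], dtype=float)

ALGORITHM_PROFILES = {          # hypothetical descriptions, for illustration only
    "random_forest": "tabular small sparse",
    "linear_svm": "text sparse large",
    "gradient_boosting": "tabular large",
}

def recommend(problem_description, top_k=2):
    q = bag_of_words_embedding(problem_description)
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    scored = {name: cosine(q, bag_of_words_embedding(desc))
              for name, desc in ALGORITHM_PROFILES.items()}
    return sorted(scored, key=scored.get, reverse=True)[:top_k]

print(recommend("small tabular dataset with sparse categorical features"))
```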


Analyzing Customer Feedback for Product Fit Prediction

Aug 28, 2019
Stephan Baier

One of the biggest hurdles for customers when purchasing fashion online is the difficulty of finding products with the right fit. In order to provide a better online shopping experience, platforms need to find ways to recommend the right product sizes and the best-fitting products to their customers. These recommendation systems, however, require customer feedback in order to estimate the most suitable sizing options. Such feedback is rare and often only available as natural text. In this paper, we examine the extraction of product fit feedback from customer reviews using natural language processing techniques. In particular, we compare traditional methods with more recent transfer learning techniques for text classification and analyze their results. Our evaluation shows that the transfer learning approach ULMFiT is not only comparatively fast to train but also achieves the highest accuracy on this task. The integration of the extracted information with actual size recommendation systems is left for future work.
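
For context, a baseline along the lines of the "traditional methods" the paper compares against (ULMFiT itself would require the fastai library): TF-IDF features plus logistic regression to classify whether a review reports a fit problem. The example reviews and labels are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [                      # toy examples, not the paper's data
    "runs small, had to send it back for a larger size",
    "fits perfectly, true to size",
    "way too tight around the shoulders",
    "great colour and the fit is exactly as expected",
]
labels = ["too_small", "fit", "too_small", "fit"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(reviews, labels)
print(clf.predict(["the dress was much smaller than advertised"]))
```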


Dynamic Learning with Frequent New Product Launches: A Sequential Multinomial Logit Bandit Problem

Apr 29, 2019
Junyu Cao, Wei Sun

Motivated by the phenomenon that companies introduce new products to keep abreast of customers' rapidly changing tastes, we consider a novel online learning setting in which a profit-maximizing seller needs to learn customers' preferences through offering recommendations, which may contain existing products as well as new products launched in the middle of a selling period. We propose a sequential multinomial logit (SMNL) model to characterize customers' behavior when product recommendations are presented in tiers. For the offline version with known customer preferences, we propose a polynomial-time algorithm and characterize the properties of the optimal tiered product recommendation. For the online problem, we propose a learning algorithm and quantify its regret bound. Moreover, we extend the setting to incorporate a constraint that ensures every new product is learned to a given accuracy. Our results demonstrate that the tier structure can be used to mitigate the risks associated with learning new products.
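
A toy sketch of two building blocks mentioned in the abstract, not the authors' algorithm or regret analysis: a multinomial-logit choice among an offered set (with a no-purchase option), and a crude online update of attractiveness estimates as choices are observed. The products, prices, and exploration scheme are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
true_v = np.array([1.0, 0.6, 0.3, 1.4])     # true attractiveness, incl. a new product
prices = np.array([5.0, 4.0, 3.0, 6.0])

def mnl_choice(offered, v):
    """Customer picks one offered product or the no-purchase option (returns -1)."""
    expv = np.exp(v[offered])
    probs = np.append(expv, 1.0) / (expv.sum() + 1.0)   # last entry = no purchase
    pick = rng.choice(len(offered) + 1, p=probs)
    return offered[pick] if pick < len(offered) else -1

counts = np.zeros(len(true_v))      # how often each product was offered
chosen = np.zeros(len(true_v))      # how often it was chosen when offered
for t in range(5000):
    v_hat = np.log((chosen + 1.0) / (counts - chosen + 1.0))   # crude estimate
    explore = rng.random() < 0.1
    ranked = rng.permutation(len(true_v)) if explore else np.argsort(-(prices * np.exp(v_hat)))
    offer = ranked[:2]                                          # top "tier" of size 2
    c = mnl_choice(offer, true_v)
    counts[offer] += 1
    if c >= 0:
        chosen[c] += 1

v_hat = np.log((chosen + 1.0) / (counts - chosen + 1.0))
print("learned ranking of products (best first):", np.argsort(-v_hat))
print("true ranking of products (best first):   ", np.argsort(-true_v))
```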


TrQuery: An Embedding-based Framework for Recommending SPARQL Queries

Jun 16, 2018
Lijing Zhang, Xiaowang Zhang, Zhiyong Feng

In this paper, we present an embedding-based framework (TrQuery) for recommending solutions of a SPARQL query, including approximate solutions when exact solutions are not available due to incompleteness or inconsistencies in real-world RDF data. Within this framework, embedding is applied to score solutions together with edit distance, so that we obtain more fine-grained recommendations than those based on edit distance alone. For instance, two query solutions whose graphs have a similar structure can be distinguished in our framework, whereas edit distance, which depends only on structural differences, cannot tell them apart. To this end, we propose a novel scoring model, built on the vector space generated by the embedding system, to compute the similarity between an approximate subgraph matching and a whole-graph matching. Finally, we evaluate our approach on the large RDF datasets DBpedia and YAGO, and experimental results show that TrQuery performs very well in terms of both effectiveness and efficiency.

* 17 pages 
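
A hedged sketch of the scoring idea only (TrQuery's actual model is trained on RDF graph embeddings): blend an embedding-based similarity with a normalized edit-distance term so that structurally similar candidate solutions can still be told apart. The vectors and token sequences below are made up for illustration.

```python
import numpy as np

def edit_distance(a, b):
    """Plain Levenshtein distance between two token sequences."""
    dp = np.arange(len(b) + 1)
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return int(dp[-1])

def combined_score(query_vec, cand_vec, query_tokens, cand_tokens, alpha=0.7):
    cos = query_vec @ cand_vec / (np.linalg.norm(query_vec) * np.linalg.norm(cand_vec) + 1e-12)
    ed = edit_distance(query_tokens, cand_tokens)
    ed_sim = 1.0 - ed / max(len(query_tokens), len(cand_tokens), 1)
    return alpha * cos + (1 - alpha) * ed_sim     # higher = better candidate solution

q = "SELECT ?film WHERE { ?film dbo:director dbr:Christopher_Nolan }".split()
c = "SELECT ?film WHERE { ?film dbo:director ?who }".split()
print(combined_score(np.ones(4), np.array([1.0, 1.0, 0.8, 1.0]), q, c))
```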
