
"Recommendation": models, code, and papers

Seq2Seq and Joint Learning Based Unix Command Line Prediction System

Jun 20, 2020
Thoudam Doren Singh, Abdullah Faiz Ur Rahman Khilji, Divyansha, Apoorva Vikram Singh, Surmila Thokchom, Sivaji Bandyopadhyay

Despite being built on an open-source operating system pioneered in the early 90s, UNIX-based platforms have not managed to win broad adoption among amateur end users. One reason for this limited popularity is their steep learning curve, a consequence of the extensive use of a command line interface in place of the more familiar interactive graphical user interface. In past years, most attempts at this problem have centered on using the user's command history to predict the next command, with approaches predominantly based on probabilistic inference models. These techniques, however, have not addressed the problem as effectively as hoped. Instead of deploying the usual machinery of recommendation systems, we employ a simple yet novel Seq2seq model that leverages continuous representations of a self-curated, exhaustive Knowledge Base (KB) to enhance the embeddings used in the model. This work describes an assistive, adaptive and dynamic way of enhancing UNIX command line prediction systems. Experimental results show that our model achieves accuracy surpassing the mixture-of-techniques and adaptive command line interface approaches reported in the past.
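As a rough illustration of the kind of model described above, the sketch below wires up a minimal encoder-decoder over command tokens in PyTorch. The vocabulary size, layer dimensions, and the plain nn.Embedding standing in for the paper's KB-enhanced embeddings are all illustrative assumptions, not the authors' architecture.

```python
# Minimal seq2seq sketch for next-command prediction (toy dimensions;
# the embedding layer could be initialized from KB-derived vectors).
import torch
import torch.nn as nn

class CommandSeq2Seq(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hid_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)  # stand-in for KB-enhanced embeddings
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, history, target):
        # history: (batch, src_len) tokens of past commands
        # target:  (batch, tgt_len) tokens of the next command (teacher forcing)
        _, h = self.encoder(self.embed(history))
        dec_out, _ = self.decoder(self.embed(target), h)
        return self.out(dec_out)  # (batch, tgt_len, vocab_size) logits

# Toy usage: vocabulary of 50 command tokens, 2 histories of length 5.
model = CommandSeq2Seq(vocab_size=50)
logits = model(torch.randint(0, 50, (2, 5)), torch.randint(0, 50, (2, 3)))
print(logits.shape)  # torch.Size([2, 3, 50])
```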

* 9 pages, 1 figure 


From Federated Learning to Fog Learning: Towards Large-Scale Distributed Machine Learning in Heterogeneous Wireless Networks

Jun 07, 2020
Seyyedali Hosseinalipour, Christopher G. Brinton, Vaneet Aggarwal, Huaiyu Dai, Mung Chiang

Contemporary network architectures are pushing computing tasks from the cloud towards the network edge, leveraging the increased processing capabilities of edge devices to meet rising user demands. Of particular importance are machine learning (ML) tasks, which are becoming ubiquitous in networked applications ranging from content recommendation systems to intelligent vehicular communications. Federated learning has emerged recently as a technique for training ML models by leveraging processing capabilities across the nodes that collect the data. There are several challenges with employing federated learning at the edge, however, due to the significant heterogeneity in compute and communication capabilities that exists across devices. To address this, we advocate a new learning paradigm called fog learning, which will intelligently distribute ML model training across the fog, the continuum of nodes from edge devices to cloud servers. Fog learning is inherently a multi-stage learning framework that breaks down the aggregations of heterogeneous local models across several layers and can leverage data offloading within each layer. Its hybrid learning paradigm transforms the star network topologies used for parameter transfers in federated learning into more distributed topologies. We also discuss several open research directions for fog learning.
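The multi-stage aggregation idea is easy to sketch: instead of one star-topology averaging step, local models are averaged within each edge cluster first, and the cluster models are then averaged at the cloud. The sketch below assumes FedAvg-style sample-count weighting over flattened parameter vectors; it illustrates the layered aggregation, not the authors' exact protocol.

```python
# Hedged sketch of two-stage (fog-style) model aggregation with numpy.
import numpy as np

def weighted_average(models, weights):
    """FedAvg-style aggregation of parameter vectors."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * m for wi, m in zip(w, models))

def fog_aggregate(clusters):
    """clusters: list of edge clusters, each a list of (param_vector, n_samples)."""
    edge_models, edge_sizes = [], []
    for cluster in clusters:  # stage 1: aggregate within each edge cluster
        models = [m for m, _ in cluster]
        sizes = [n for _, n in cluster]
        edge_models.append(weighted_average(models, sizes))
        edge_sizes.append(sum(sizes))
    # stage 2: cloud-level aggregation of the edge models
    return weighted_average(edge_models, edge_sizes)

# Toy usage: two edge clusters of devices holding 3-parameter models.
c1 = [(np.array([1.0, 0.0, 0.0]), 10), (np.array([0.0, 1.0, 0.0]), 30)]
c2 = [(np.array([0.0, 0.0, 1.0]), 20)]
print(fog_aggregate([c1, c2]))
```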

* 7 pages, 4 figures 


Initial Design Strategies and their Effects on Sequential Model-Based Optimization

Mar 30, 2020
Jakob Bossek, Carola Doerr, Pascal Kerschke

Sequential model-based optimization (SMBO) approaches are algorithms for solving problems that require computationally or otherwise expensive function evaluations. The key design principle of SMBO is the substitution of the true objective function by a surrogate, which is used to propose the point(s) to be evaluated next. SMBO algorithms are intrinsically modular, leaving the user with many important design choices. Significant research effort goes into understanding which settings perform best for which types of problems. Most works, however, focus on the choice of the model, the acquisition function, and the strategy used to optimize the latter, while the choice of the initial sampling strategy receives much less attention. Not surprisingly, quite divergent recommendations can be found in the literature. In this work we analyze how the size and the distribution of the initial sample influence the overall quality of the efficient global optimization (EGO) algorithm, a well-known SMBO approach. While, overall, small initial budgets using Halton sampling seem preferable, we also observe that the performance landscape is rather unstructured. We furthermore identify several situations in which EGO performs unfavorably against random sampling. Both observations indicate that an adaptive SMBO design could be beneficial, making SMBO an interesting test bed for automated algorithm design.
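The design choice under study, seeding the surrogate with a Halton sample rather than uniform random points, can be sketched with SciPy's QMC module (scipy >= 1.7). The helper name, budget, and search box below are toy assumptions.

```python
# Hedged sketch: build the initial design for an EGO/SMBO run with either
# Halton or uniform random sampling.
import numpy as np
from scipy.stats import qmc

def initial_design(n_points, dim, lower, upper, strategy="halton", seed=0):
    if strategy == "halton":
        unit = qmc.Halton(d=dim, seed=seed).random(n_points)
    else:  # plain uniform random sampling as the baseline strategy
        unit = np.random.default_rng(seed).random((n_points, dim))
    return qmc.scale(unit, lower, upper)  # map from [0, 1)^dim to the search box

# Toy usage: a small initial budget of 5 points in [-5, 5]^2,
# evaluated before fitting the first surrogate model.
X0 = initial_design(5, 2, lower=[-5, -5], upper=[5, 5])
print(X0)
```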

* To appear in Proc. of ACM Genetic and Evolutionary Computation Conference (GECCO'20) 


Leveraging Cross Feedback of User and Item Embeddings for Variational Autoencoder based Collaborative Filtering

Feb 21, 2020
Yuan Jin, He Zhao, Ming Liu, Lan Du, Yunfeng Li, Ruohua Xu, Longxiang Gao

Matrix factorization (MF) has been widely applied to collaborative filtering in recommendation systems. Its Bayesian variants can derive posterior distributions of user and item embeddings, and are more robust to sparse ratings. However, the Bayesian methods are restricted in their update rules for the posterior parameters by the conjugacy of the priors and the likelihood. Neural networks can potentially address this issue by capturing complex mappings between the posterior parameters and the data. In this paper, we propose a variational auto-encoder based Bayesian MF framework. It leverages not only the data but also the information from the embeddings to approximate their joint posterior distribution. The approximation is an iterative procedure with cross feedback of user and item embeddings to each other's encoders. More specifically, user embeddings sampled in the previous iteration, alongside their ratings, are fed back into the item-side encoders to compute the posterior parameters for the item embeddings in the current iteration, and vice versa. The decoder network then reconstructs the data using MF with the currently re-sampled user and item embeddings. We show the effectiveness of our framework in terms of reconstruction errors across five real-world datasets. We also perform ablation studies to illustrate the importance of the cross feedback component of our framework in lowering the reconstruction errors and accelerating the convergence.
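A stripped-down sketch of the cross-feedback loop follows, with the variational encoders reduced to ridge-style posterior-mean updates plus Gaussian noise and the decoder reduced to the plain MF reconstruction. All dimensions and the toy rating matrix are invented for illustration; the real framework uses full encoder networks.

```python
# Hedged sketch of cross feedback between user and item embeddings.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k, lam = 4, 6, 3, 0.1
R = rng.random((n_users, n_items))        # toy rating matrix
U = rng.normal(size=(n_users, k))         # user embeddings
V = rng.normal(size=(n_items, k))         # item embeddings

for it in range(20):
    # item-side step: ratings plus the previously sampled user embeddings
    V = R.T @ U @ np.linalg.inv(U.T @ U + lam * np.eye(k))
    V += 0.01 * rng.normal(size=V.shape)  # re-sample item embeddings
    # user-side step: ratings plus the freshly re-sampled item embeddings
    U = R @ V @ np.linalg.inv(V.T @ V + lam * np.eye(k))
    U += 0.01 * rng.normal(size=U.shape)  # re-sample user embeddings
    err = np.mean((R - U @ V.T) ** 2)     # decoder: MF reconstruction error

print(f"reconstruction MSE after cross-feedback iterations: {err:.4f}")
```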



Optimal best arm selection for general distributions

Aug 24, 2019
Shubhada Agrawal, Sandeep Juneja, Peter Glynn

Given a finite set of unknown distributions, or arms, that can be sampled from, we consider the problem of identifying the one with the largest mean using a delta-correct algorithm (an adaptive, sequential algorithm that restricts the probability of error to a specified delta) with minimum sample complexity. Lower bounds for delta-correct algorithms are well known. Further, delta-correct algorithms that match the lower bound asymptotically as delta reduces to zero have been developed in the literature for the case where the arm distributions are restricted to a single-parameter exponential family. In this paper, we first observe a negative result: some restrictions are essential, as otherwise, under a delta-correct algorithm, distributions with unbounded support would require an infinite number of samples in expectation. We then propose a delta-correct algorithm that matches the lower bound as delta reduces to zero under the mild restriction that a known bound exists on the expectation of a non-negative, increasing convex function (for example, the squared moment) of the underlying random variables. We also propose batch processing and identify optimal batch sizes to substantially speed up the proposed algorithm. This best arm selection problem is a well-studied classic problem in the simulation community, with many learning applications including recommendation systems and product selection.
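The paper's optimal algorithm is more involved; as a flavor of what a delta-correct procedure looks like, the sketch below implements standard successive elimination for rewards bounded in [0, 1], stopping once the anytime confidence intervals separate a single arm from the rest. The confidence radius and constants are textbook-style assumptions, not the authors' construction.

```python
# Hedged sketch of a delta-correct best-arm procedure (successive elimination).
import math, random

def successive_elimination(arms, delta):
    """arms: list of zero-argument samplers returning rewards in [0, 1]."""
    active = list(range(len(arms)))
    sums = [0.0] * len(arms)
    counts = [0] * len(arms)
    t = 0
    while len(active) > 1:
        t += 1
        for i in active:                    # sample every surviving arm once per round
            sums[i] += arms[i]()
            counts[i] += 1
        # anytime confidence radius (union bound over arms and rounds)
        rad = math.sqrt(math.log(4 * len(arms) * t * t / delta) / (2 * t))
        means = {i: sums[i] / counts[i] for i in active}
        best_mean = max(means.values())
        # keep arms whose upper bound still reaches the leader's lower bound
        active = [i for i in active if means[i] + rad >= best_mean - rad]
    return active[0]

# Toy usage: three Bernoulli arms; with probability >= 1 - delta this returns arm 2.
random.seed(1)
arms = [lambda p=p: float(random.random() < p) for p in (0.3, 0.5, 0.7)]
print(successive_elimination(arms, delta=0.05))
```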

* 34 pages 


A multiple criteria methodology for prioritizing and selecting portfolios of urban projects

Dec 13, 2018
Maria Barbati, Josè Rui Figueira, Salvatore Greco, Alessio Ishizaka, Simona Panaro

This paper presents an integrated methodology supporting decisions in urban planning. In particular, it deals with the prioritization and selection of a portfolio of projects related to buildings of value for the cultural heritage of cities; the methodology has been validated on the historical center of Naples, Italy. Each project is assessed on the basis of a set of both quantitative and qualitative criteria in order to determine its level of priority for further selection. This step is performed through the application of the Electre Tri-nC method, a multiple criteria outranking-based model for ordinal classification (or sorting) problems, which assigns a priority level to each project as an analytical 'recommendation' tool. A set of resources (namely budgetary constraints), as well as some logical constraints related to urban policy requirements, is then taken into consideration together with the priority of the projects in a portfolio analysis model, which identifies the efficient portfolios and supports the selection of the most adequate set of projects to activate. The process was conducted through interaction between analysts, municipality representatives, and experts. The proposed methodology is generic enough to be applied to other territorial or urban planning problems. Given the increasing interest of historical cities in restoring their cultural heritage, the integrated multiple criteria decision aiding analytical tool proposed in this paper has significant potential for future use.
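The final portfolio-selection step can be sketched as a small budget-constrained search over project subsets, taking the priority levels produced by the sorting phase as given. The Electre Tri-nC classification itself and the policy constraints are outside this toy snippet, and all project names, costs, and priorities below are invented.

```python
# Hedged sketch: exhaustive search for the best portfolio under a budget,
# scoring portfolios by the (assumed) priority levels from the sorting phase.
from itertools import combinations

projects = {  # name: (cost in budget units, priority level; higher = more urgent)
    "facade_restoration": (40, 3),
    "museum_wing":        (70, 2),
    "church_roof":        (30, 3),
    "plaza_lighting":     (20, 1),
}
budget = 100

best_set, best_score = (), -1
names = list(projects)
for r in range(1, len(names) + 1):
    for combo in combinations(names, r):
        cost = sum(projects[p][0] for p in combo)
        score = sum(projects[p][1] for p in combo)
        if cost <= budget and score > best_score:
            best_set, best_score = combo, score

print(best_set, best_score)  # the highest-priority portfolio within budget
```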



AI for the Common Good?! Pitfalls, challenges, and Ethics Pen-Testing

Nov 01, 2018
Bettina Berendt

Recently, many AI researchers and practitioners have embarked on research visions that involve doing AI for "Good". This is part of a general drive towards infusing AI research and practice with ethical thinking. One frequent theme in current ethical guidelines is the requirement that AI be good for all, or: contribute to the Common Good. But what is the Common Good, and is it enough to want to be good? Via four lead questions, I will illustrate challenges and pitfalls when determining, from an AI point of view, what the Common Good is and how it can be enhanced by AI. The questions are: What is the problem / What is a problem?, Who defines the problem?, What is the role of knowledge?, and What are important side effects and dynamics? The illustration will use an example from the domain of "AI for Social Good", more specifically "Data Science for Social Good". Even if the importance of these questions may be known at an abstract level, they do not get asked sufficiently in practice, as shown by an exploratory study of 99 contributions to recent conferences in the field. Turning these challenges and pitfalls into a positive recommendation, as a conclusion I will draw on another characteristic of computer-science thinking and practice to make these impediments visible and attenuate them: "attacks" as a method for improving design. This results in the proposal of ethics pen-testing as a method for helping AI designs to better contribute to the Common Good.

* to appear in Paladyn. Journal of Behavioral Robotics; accepted on 27-10-2018 


Log-based Evaluation of Label Splits for Process Models

Jun 23, 2016
Niek Tax, Natalia Sidorova, Reinder Haakma, Wil M. P. van der Aalst

Process mining techniques aim to extract insights into processes from event logs. One of the challenges in process mining is identifying interesting and meaningful event labels that contribute to a better understanding of the process. Our application area is mining data from smart homes for the elderly, where the ultimate goal is to signal deviations from usual behavior and provide timely recommendations in order to extend the period of independent living. Extracting individual process models showing user behavior is an important instrument in achieving this goal. However, the interpretation of sensor data at an appropriate abstraction level is not straightforward. For example, a motion sensor in a bedroom can be triggered by tossing and turning in bed or by getting up. We try to derive the actual activity depending on the context (time, previous events, etc.). In this paper we introduce the notion of label refinements, which link more abstract event descriptions with their more refined counterparts. We present a statistical evaluation method to determine the usefulness of a label refinement for a given event log from a process perspective. Based on data from smart homes, we show how our statistical evaluation method for label refinements can be used in practice. Our method was able to select two label refinements out of a set of candidate label refinements that both had a positive effect on model precision.
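A label refinement of the bedroom-motion example can be sketched as a context-dependent relabeling of the event log; the time-of-day threshold and label names below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch: refine an abstract sensor label using time-of-day context.
from datetime import datetime

def refine_bedroom_motion(timestamp, label):
    """Split 'bedroom_motion' into two more specific labels by hour of day."""
    if label != "bedroom_motion":
        return label
    return "getting_up" if 6 <= timestamp.hour < 11 else "tossing_and_turning"

log = [
    (datetime(2016, 6, 23, 3, 15), "bedroom_motion"),
    (datetime(2016, 6, 23, 7, 40), "bedroom_motion"),
    (datetime(2016, 6, 23, 8, 5), "kitchen_motion"),
]
refined = [(ts, refine_bedroom_motion(ts, lab)) for ts, lab in log]
print(refined)  # the refined log can then be mined and scored for model precision
```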

* Procedia Computer Science, 96 (2016) 63-72 
* Paper accepted at the 20th International Conference on Knowledge-Based and Intelligent Information & Engineering Systems, to appear in Procedia Computer Science 


Evaluation of Explore-Exploit Policies in Multi-result Ranking Systems

Apr 28, 2015
Dragomir Yankov, Pavel Berkhin, Lihong Li

We analyze the problem of using Explore-Exploit techniques to improve precision in multi-result ranking systems such as web search, query autocompletion and news recommendation. Adopting an exploration policy directly online, without understanding its impact on the production system, may have unwanted consequences - the system may sustain large losses, create user dissatisfaction, or collect exploration data which does not help improve ranking quality. An offline framework is thus necessary to let us decide which policy to apply in a production environment, and how, to ensure a positive outcome. Here, we describe such an offline framework. Using the framework, we study a popular exploration policy - Thompson sampling. We show that there are different ways of implementing it in multi-result ranking systems, each having a different semantic interpretation and leading to different results in terms of sustained click-through-rate (CTR) loss and expected model improvement. In particular, we demonstrate that Thompson sampling can act as an online learner optimizing CTR, which in some cases can lead to an interesting outcome: a lift in CTR during exploration. This observation is important for production systems, as it suggests that one can collect valuable exploration data to improve ranking performance in the long run while at the same time increasing CTR while exploration lasts.
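One way to instantiate Thompson sampling in a multi-result ranker can be sketched as follows: maintain a Beta posterior over each candidate's CTR, draw one sample per candidate, and fill the result slots with the top draws. The prior, update rule, and slot count below are illustrative choices, not necessarily the exact semantics the paper evaluates.

```python
# Hedged sketch: Beta-Bernoulli Thompson sampling over ranking candidates.
import random

class BetaArm:
    def __init__(self):
        self.alpha, self.beta = 1, 1          # uniform prior over the candidate's CTR

    def sample(self):
        return random.betavariate(self.alpha, self.beta)

    def update(self, clicked):
        self.alpha += clicked                 # posterior update from click feedback
        self.beta += 1 - clicked

def rank_results(arms, k):
    """Return the indices of the top-k candidates by posterior sample."""
    draws = [(arm.sample(), i) for i, arm in enumerate(arms)]
    return [i for _, i in sorted(draws, reverse=True)[:k]]

# Toy usage: 5 candidates, 3 result slots; pretend only candidate 0 gets clicked.
random.seed(0)
arms = [BetaArm() for _ in range(5)]
shown = rank_results(arms, k=3)
for i in shown:
    arms[i].update(clicked=int(i == 0))
print(shown)
```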



Symmetric Collaborative Filtering Using the Noisy Sensor Model

Jan 10, 2013
Rita Sharma, David L Poole

Collaborative filtering is the process of making recommendations regarding the potential preference of a user, for example shopping on the Internet, based on the preference ratings of the user and a number of other users for various items. This paper considers collaborative filtering based on explicit multi-valued ratings. To evaluate the algorithms, we consider only pure collaborative filtering, using ratings exclusively, and no other information about the people or items. Our approach is to predict a user's preferences regarding a particular item by using other people who rated that item, and other items rated by the user, as noisy sensors. The noisy sensor model uses Bayes' theorem to compute the probability distribution for the user's rating of a new item. We give two variant models: in one, we learn a classical normal linear regression model of how users rate items; in another, we assume different users rate items the same, but the accuracy of the sensors needs to be learned. We compare these variant models with state-of-the-art techniques and show how they are significantly better, whether a user has rated only two items or many. We report empirical results using the EachMovie database (http://research.compaq.com/SRC/eachmovie/) of movie ratings. We also show that by considering item similarity along with user similarity, the accuracy of the prediction increases.
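The noisy-sensor combination can be sketched directly from Bayes' theorem: each neighbor's rating of the item is treated as a noisy observation of the target user's true rating, and the observation likelihoods are multiplied into a posterior over the rating scale. The simple distance-based likelihood below is an assumption standing in for the paper's learned regression and accuracy models.

```python
# Hedged sketch: combine neighbors' ratings as noisy sensors via Bayes' theorem.
import numpy as np

RATINGS = np.arange(1, 6)   # 5-point rating scale

def likelihood(obs, true, accuracy=0.5):
    """P(a noisy sensor reports `obs` | true rating), peaked at `true`."""
    w = accuracy ** np.abs(RATINGS - true)   # unnormalized over all possible reports
    return w[obs - 1] / w.sum()

def posterior_rating(neighbor_ratings):
    """Uniform prior times the product of per-neighbor sensor likelihoods."""
    post = np.ones(len(RATINGS)) / len(RATINGS)
    for obs in neighbor_ratings:
        post *= np.array([likelihood(obs, t) for t in RATINGS])
    return post / post.sum()

# Toy usage: three neighbors rated the movie 4, 5, 4.
dist = posterior_rating([4, 5, 4])
print(dict(zip(RATINGS.tolist(), dist.round(3))))
print("expected rating:", float(RATINGS @ dist))
```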

* Appears in Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence (UAI2001) 

