
"Recommendation": models, code, and papers

Do we need to go Deep? Knowledge Tracing with Big Data

Jan 20, 2021
Varun Mandalapu, Jiaqi Gong, Lujie Chen

Interactive Educational Systems (IES) have enabled researchers to trace student knowledge across different skills and recommend better learning paths. Interest in using the student interaction data captured by IES to build learner performance models, which estimate student knowledge and predict future performance, is growing rapidly. Moreover, with advances in computing systems, the amount of data captured by IES is also increasing, enabling deep learning models to compete with traditional logistic models and Markov processes. However, it is still not empirically evident whether these deep models outperform traditional models at the current scale of datasets with millions of student interactions. In this work, we adopt EdNet, the largest publicly available student interaction dataset in the education domain, to understand how accurately both deep and traditional models predict future student performance. Through extensive experimentation, we observe that logistic regression models with carefully engineered features outperform deep models. We follow this analysis with interpretation studies based on Local Interpretable Model-agnostic Explanations (LIME) to understand the impact of various features on the best-performing model's predictions.

* 9 Pages, 4 figures, AAAI Workshop on AI in Education (Imagining Post-COVID Education with AI) 

  Access Paper or Ask Questions
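
A minimal sketch (not the paper's pipeline) of the kind of approach the abstract describes: a logistic regression model over a few hand-engineered interaction features, with a LIME explanation of one prediction. The feature names and synthetic data are illustrative assumptions.

```python
# Sketch: logistic regression on engineered student-interaction features + LIME.
import numpy as np
from sklearn.linear_model import LogisticRegression
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["past_accuracy", "attempts_on_skill",
                 "avg_response_time", "days_since_last_practice"]  # hypothetical features
X = rng.random((1000, 4))
# Synthetic target: correctness loosely driven by past accuracy and recency.
y = (X[:, 0] - 0.3 * X[:, 3] + 0.1 * rng.standard_normal(1000) > 0.3).astype(int)

model = LogisticRegression().fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["incorrect", "correct"], mode="classification",
)
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(exp.as_list())  # local per-feature contributions for this one prediction
```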

Big Networks: A Survey

Aug 09, 2020
Hayat Dino Bedru, Shuo Yu, Xinru Xiao, Da Zhang, Liangtian Wan, He Guo, Feng Xia

A network is a typical expressive form for representing complex systems in terms of vertices and links, in which the pattern of interactions amongst the components of the network is intricate. A network can be static, remaining unchanged over time, or dynamic, evolving through time. The complexity of network analysis also changes as network sizes grow explosively. In this paper, we introduce a new network science concept called the big network. Big networks are generally large-scale, with a complicated and higher-order inner structure. This paper proposes a guideline framework that gives insight into the major topics in the area of network science from the viewpoint of a big network. We first introduce the structural characteristics of big networks at three levels: micro-level, meso-level, and macro-level. We then discuss some state-of-the-art advanced topics of big network analysis. Big network models and related approaches, including ranking methods, partition approaches, and network embedding algorithms, are systematically introduced. Some typical applications in big networks are then reviewed, such as community detection, link prediction, and recommendation. Moreover, we pinpoint some critical open issues that need further investigation.

* Computer Science Review, Volume 37, August 2020, 100247 
* 69 pages, 4 figures 

  Access Paper or Ask Questions
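
To make the surveyed task families concrete, here is a small illustrative sketch (not taken from the survey) that runs ranking, community detection, and link prediction on a toy graph with networkx; the graph and candidate pairs are stand-ins.

```python
import networkx as nx

G = nx.karate_club_graph()  # toy stand-in for a "big" network

# Ranking: PageRank scores for the vertices.
ranks = nx.pagerank(G)
print("top-ranked nodes:", sorted(ranks, key=ranks.get, reverse=True)[:5])

# Community detection: greedy modularity maximisation.
communities = nx.algorithms.community.greedy_modularity_communities(G)
print("communities found:", len(communities))

# Link prediction: Jaccard coefficient for a few candidate pairs.
for u, v, score in nx.jaccard_coefficient(G, [(0, 9), (2, 33), (5, 16)]):
    print(f"link score ({u},{v}) = {score:.3f}")
```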

Scalable Bayesian Preference Learning for Crowds

Dec 11, 2019
Edwin Simpson, Iryna Gurevych

We propose a scalable Bayesian preference learning method for jointly predicting the preferences of individuals as well as the consensus of a crowd from pairwise labels. People's opinions often differ greatly, making it difficult to predict their preferences from small amounts of personal data. Individual biases also make it harder to infer the consensus of a crowd when there are few labels per item. We address these challenges by combining matrix factorisation with Gaussian processes, using a Bayesian approach to account for uncertainty arising from noisy and sparse data. Our method exploits input features, such as text embeddings and user metadata, to predict preferences for new items and users that are not in the training set. As previous solutions based on Gaussian processes do not scale to large numbers of users, items or pairwise labels, we propose a stochastic variational inference approach that limits computational and memory costs. Our experiments on a recommendation task show that our method is competitive with previous approaches despite our scalable inference approximation. We demonstrate the method's scalability on a natural language processing task with thousands of users and items, and show improvements over the state of the art on this task. We make our software publicly available for future work.


  Access Paper or Ask Questions
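
A deliberately simplified sketch of the underlying idea, assuming a plain Bradley-Terry likelihood with shared item factors and per-user weight vectors fitted by SGD; it omits the Gaussian-process priors and the stochastic variational inference the paper actually uses, so treat it only as intuition for "personal preferences from pairwise labels".

```python
import random
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 20, 50, 5
U = 0.1 * rng.standard_normal((n_users, dim))   # per-user weight vectors
V = 0.1 * rng.standard_normal((n_items, dim))   # shared item factors

# Synthetic pairwise labels: (user, item_a, item_b, y), y = 1 if a is preferred to b.
true_scores = rng.standard_normal(n_items)
data = []
for _ in range(5000):
    u, a, b = rng.integers(n_users), rng.integers(n_items), rng.integers(n_items)
    if a != b:
        p = 1.0 / (1.0 + np.exp(true_scores[b] - true_scores[a]))
        data.append((u, a, b, int(rng.random() < p)))

lr = 0.05
for epoch in range(20):
    random.shuffle(data)
    for u, a, b, y in data:
        uu = U[u].copy()
        p = 1.0 / (1.0 + np.exp(-(uu @ (V[a] - V[b]))))  # P(user u prefers a to b)
        g = y - p                                        # gradient of the log-likelihood
        U[u] += lr * g * (V[a] - V[b])
        V[a] += lr * g * uu
        V[b] -= lr * g * uu

correct = sum((U[u] @ (V[a] - V[b]) > 0) == bool(y) for u, a, b, y in data)
print("training agreement:", correct / len(data))
```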

Motivating the Rules of the Game for Adversarial Example Research

Jul 20, 2018
Justin Gilmer, Ryan P. Adams, Ian Goodfellow, David Andersen, George E. Dahl

Advances in machine learning have led to broad deployment of systems with impressive performance on important problems. Nonetheless, these systems can be induced to make errors on data that are surprisingly similar to examples the learned system handles correctly. The existence of these errors raises a variety of questions about out-of-sample generalization and whether bad actors might use such examples to abuse deployed systems. As a result of these security concerns, there has been a flurry of recent papers proposing algorithms to defend against such malicious perturbations of correctly handled examples. It is unclear how such misclassifications represent a different kind of security problem than other errors, or even other attacker-produced examples that have no specific relationship to an uncorrupted input. In this paper, we argue that adversarial example defense papers have, to date, mostly considered abstract, toy games that do not relate to any specific security concern. Furthermore, defense papers have not yet precisely described all the abilities and limitations of attackers that would be relevant in practical security. Towards this end, we establish a taxonomy of motivations, constraints, and abilities for more plausible adversaries. Finally, we provide a series of recommendations outlining a path forward for future work to more clearly articulate the threat model and perform more meaningful evaluation.


  Access Paper or Ask Questions
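
Since this is a position paper, the following is background illustration only: a minimal fast-gradient-sign-method (FGSM) sketch showing the kind of small perturbation of a correctly handled input that the abstract refers to. The linear model and random data are toy assumptions, not anything from the paper.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(10, 2)                 # stand-in classifier
x = torch.randn(1, 10, requires_grad=True)     # a "correctly handled" input
y = torch.tensor([1])                          # its assumed true label

loss = F.cross_entropy(model(x), y)
loss.backward()

epsilon = 0.25
x_adv = x + epsilon * x.grad.sign()            # small step that increases the loss

print("clean prediction:    ", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())  # may flip
```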

Active classification with comparison queries

Jun 02, 2017
Daniel M. Kane, Shachar Lovett, Shay Moran, Jiapeng Zhang

We study an extension of active learning in which the learning algorithm may ask the annotator to compare the distances of two examples from the boundary of their label-class. For example, in a recommendation system application (say for restaurants), the annotator may be asked whether she liked or disliked a specific restaurant (a label query), or which of two restaurants she liked more (a comparison query). We focus on the class of half spaces, and show that under natural assumptions, such as large margin or bounded bit-description of the input examples, it is possible to reveal all the labels of a sample of size $n$ using approximately $O(\log n)$ queries. This implies an exponential improvement over classical active learning, where only label queries are allowed. We complement these results by showing that if any of these assumptions is removed then, in the worst case, $\Omega(n)$ queries are required. Our results follow from a new general framework of active learning with additional queries. We identify a combinatorial dimension, called the \emph{inference dimension}, that captures the query complexity when each additional query is determined by $O(1)$ examples (such as comparison queries, each of which is determined by the two compared examples). Our results for half spaces follow by bounding the inference dimension in the cases discussed above.

* 23 pages (not including references), 1 figure. The new version contains a minor fix in the proof of Lemma 4.2 

  Access Paper or Ask Questions
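
An illustrative sketch of the two query types described in the abstract (not the paper's algorithm): a simulated annotator who knows a hidden halfspace answers label queries and comparison queries about points the learner presents.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = np.array([1.0, -2.0]), 0.5        # hidden halfspace, known only to the annotator

def label_query(x):
    """Label query: which side of the boundary is x on?"""
    return 1 if w @ x + b >= 0 else -1

def comparison_query(x, y):
    """Comparison query: which of x, y lies closer to the decision boundary?"""
    return "x" if abs(w @ x + b) <= abs(w @ y + b) else "y"

X = rng.standard_normal((5, 2))
print("labels:", [label_query(x) for x in X])
print("closer to boundary:", comparison_query(X[0], X[1]))
```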

People on Media: Jointly Identifying Credible News and Trustworthy Citizen Journalists in Online Communities

May 09, 2017
Subhabrata Mukherjee, Gerhard Weikum

Media seems to have become more partisan, often providing biased coverage of news that caters to the interests of specific groups. It is therefore essential to identify credible information content that provides an objective narrative of an event. News communities such as digg, reddit, or newstrust offer recommendations, reviews, quality ratings, and further insights on journalistic works. However, there is a complex interaction between different factors in such online communities: fairness and style of reporting, language clarity and objectivity, topical perspectives (like political viewpoint), expertise and bias of community members, and more. This paper presents a model to systematically analyze the different interactions in a news community between users, news, and sources. We develop a probabilistic graphical model that leverages this joint interaction to identify 1) highly credible news articles, 2) trustworthy news sources, and 3) expert users who perform the role of "citizen journalists" in the community. Our method extends CRF models to incorporate real-valued ratings, as some communities have very fine-grained scales that cannot be easily discretized without losing information. To the best of our knowledge, this paper is the first full-fledged analysis of credibility, trust, and expertise in news communities.


  Access Paper or Ask Questions
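
The paper's model is a CRF over users, news, and sources; as rough intuition only, here is a far simpler mutual-reinforcement sketch in which article credibility and user trustworthiness are estimated jointly from a made-up matrix of real-valued ratings.

```python
import numpy as np

rng = np.random.default_rng(0)
ratings = rng.random((8, 12))            # users x articles, 1.0 = rated fully credible

trust = np.ones(8) / 8                   # initial user trustworthiness
for _ in range(20):
    credibility = trust @ ratings / trust.sum()           # trust-weighted article scores
    agreement = 1.0 - np.abs(ratings - credibility).mean(axis=1)
    trust = agreement / agreement.sum()                   # users who agree gain trust

print("article credibility:", np.round(credibility, 2))
print("user trust:", np.round(trust, 2))
```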

Liquid Democracy: An Analysis in Binary Aggregation and Diffusion

Jan 19, 2017
Zoé Christoff, Davide Grossi

The paper proposes an analysis of liquid democracy (or, delegable proxy voting) from the perspective of binary aggregation and of binary diffusion models. We show how liquid democracy on binary issues can be embedded into the framework of binary aggregation with abstentions, enabling the transfer of known results about the latter---such as impossibility theorems---to the former. This embedding also sheds light on the relation between delegation cycles in liquid democracy and the probability of collective abstentions, as well as the issue of individual rationality in a delegable proxy voting setting. We then show how liquid democracy on binary issues can be modeled and analyzed also as a specific process of dynamics of binary opinions on networks. These processes---called Boolean DeGroot processes---are a special case of the DeGroot stochastic model of opinion diffusion. We establish the convergence conditions of such processes and show they provide some novel insights on how the effects of delegation cycles and individual rationality could be mitigated within liquid democracy. The study is a first attempt to provide theoretical foundations for the delegable proxy features of the liquid democracy voting system. Our analysis suggests recommendations on how the system may be modified to make it more resilient with respect to the handling of delegation cycles and of inconsistent majorities.

* Working paper 

  Access Paper or Ask Questions
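
A minimal sketch of delegable proxy voting on a single binary issue, illustrating the delegation-cycle problem the abstract discusses: voters either vote directly or delegate, and any voter whose delegation chain ends in a cycle is counted as abstaining. The ballot format is an assumption for illustration.

```python
def resolve(ballots):
    """ballots maps voter -> True/False (direct vote) or ('delegate', proxy)."""
    def trace(voter, seen):
        b = ballots[voter]
        if not (isinstance(b, tuple) and b[0] == "delegate"):
            return b                      # direct vote
        if voter in seen:
            return None                   # delegation cycle -> abstention
        return trace(b[1], seen | {voter})
    return {v: trace(v, set()) for v in ballots}

ballots = {
    "ann": True,
    "bob": ("delegate", "ann"),
    "cal": ("delegate", "dee"),
    "dee": ("delegate", "cal"),           # cal and dee form a cycle: both abstain
}
votes = resolve(ballots)
print(votes)
print("yes:", sum(v is True for v in votes.values()),
      "no:", sum(v is False for v in votes.values()),
      "abstain:", sum(v is None for v in votes.values()))
```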

Exponential Family Embeddings

Nov 21, 2016
Maja R. Rudolph, Francisco J. R. Ruiz, Stephan Mandt, David M. Blei

Word embeddings are a powerful approach for capturing semantic similarity among terms in a vocabulary. In this paper, we develop exponential family embeddings, a class of methods that extends the idea of word embeddings to other types of high-dimensional data. As examples, we studied neural data with real-valued observations, count data from a market basket analysis, and ratings data from a movie recommendation system. The main idea is to model each observation conditioned on a set of other observations. This set is called the context, and the way the context is defined is a modeling choice that depends on the problem. In language the context is the surrounding words; in neuroscience the context is close-by neurons; in market basket data the context is other items in the shopping cart. Each type of embedding model defines the context, the exponential family of conditional distributions, and how the latent embedding vectors are shared across data. We infer the embeddings with a scalable algorithm based on stochastic gradient descent. On all three applications - neural activity of zebrafish, users' shopping behavior, and movie ratings - we found exponential family embedding models to be more effective than other types of dimension reduction. They better reconstruct held-out data and find interesting qualitative structure.


  Access Paper or Ask Questions
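
A small sketch of the Poisson flavour of this idea for market-basket counts, with synthetic data and plain stochastic gradient ascent on the Poisson log-likelihood; it is a simplification of the paper's setup, keeping only the embedding vectors rho and context vectors alpha, where each item's natural parameter is rho_i dotted with the summed context embeddings of the other items in the basket.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, dim = 30, 5
baskets = rng.poisson(0.3, size=(300, n_items))           # synthetic basket counts

rho = 0.01 * rng.standard_normal((n_items, dim))           # embedding vectors
alpha = 0.01 * rng.standard_normal((n_items, dim))         # context vectors

lr = 0.001
for epoch in range(5):
    for x in baskets:                                      # one SGD pass per basket
        for i in range(n_items):
            context = x @ alpha - x[i] * alpha[i]          # sum_{j != i} x_j * alpha_j
            lam = np.exp(rho[i] @ context)                 # Poisson mean
            grad = x[i] - lam                              # d log-likelihood / d eta
            g_alpha = grad * np.outer(x, rho[i])
            g_alpha[i] = 0.0                               # an item is not its own context
            rho[i] += lr * grad * context
            alpha += lr * g_alpha
print("fitted embedding norms:", np.round(np.linalg.norm(rho, axis=1)[:5], 3))
```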

Sentiment Analysis of Review Datasets Using Naive Bayes and K-NN Classifier

Oct 31, 2016
Lopamudra Dey, Sanjay Chakraborty, Anuraag Biswas, Beepa Bose, Sweta Tiwari

The advent of Web 2.0 has led to an increase in the amount of sentimental content available on the Web. Such content is often found on social media sites in the form of movie or product reviews, user comments, testimonials, messages in discussion forums, etc. Timely discovery of sentimental or opinionated web content has a number of advantages, the most important being monetization. Understanding the sentiments of human masses towards different entities and products enables better services for contextual advertisements, recommendation systems, and analysis of market trends. The focus of our project is a sentiment-focused web crawling framework to facilitate the quick discovery of sentimental content in movie reviews and hotel reviews, and the analysis of the same. We use statistical methods to capture elements of subjective style and sentence polarity. The paper elaborately discusses two supervised machine learning algorithms, K-Nearest Neighbour (K-NN) and Naive Bayes, and compares their overall accuracy, precision, and recall values. For movie reviews, Naive Bayes gave far better results than K-NN, while for hotel reviews both algorithms gave lower and nearly identical accuracies.

* Volume-8, Issue-4, pp.54-62, 2016 

  Access Paper or Ask Questions
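
A compact sketch of the comparison described in the abstract, using scikit-learn's Multinomial Naive Bayes and K-NN on bag-of-words features; the reviews below are placeholder examples rather than the paper's movie and hotel datasets.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

reviews = [
    "a gripping film with wonderful acting",
    "dull plot and terrible pacing",
    "the room was clean and the staff friendly",
    "noisy, dirty and badly managed hotel",
    "an absolute joy from start to finish",
    "a waste of two hours",
]
labels = [1, 0, 1, 0, 1, 0]   # 1 = positive, 0 = negative

for name, clf in [("Naive Bayes", MultinomialNB()),
                  ("K-NN", KNeighborsClassifier(n_neighbors=3))]:
    pipe = make_pipeline(CountVectorizer(), clf)          # bag-of-words + classifier
    scores = cross_val_score(pipe, reviews, labels, cv=3)
    print(name, "accuracy:", scores.mean())
```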
