
"Topic": models, code, and papers

Semantic TrueLearn: Using Semantic Knowledge Graphs in Recommendation Systems

Dec 08, 2021
Sahan Bulathwela, María Pérez-Ortiz, Emine Yilmaz, John Shawe-Taylor

In informational recommenders, many challenges arise from the need to handle the semantic and hierarchical structure between knowledge areas. This work aims to advance towards building a state-aware educational recommendation system that incorporates semantic relatedness between knowledge topics, propagating latent information across semantically related topics. We introduce a novel learner model that exploits this semantic relatedness between knowledge components in learning resources using the Wikipedia link graph, with the aim of better predicting learner engagement and latent knowledge in a lifelong learning scenario. In this sense, Semantic TrueLearn builds a humanly intuitive knowledge representation while leveraging Bayesian machine learning to improve the prediction of educational engagement. Our experiments with a large dataset demonstrate that this new semantic version of the TrueLearn algorithm achieves statistically significant improvements in predictive performance with a simple extension that adds semantic awareness to the model.

* Presented at the First International Workshop on Joint Use of Probabilistic Graphical Models and Ontology at Conference on Knowledge Graph and Semantic Web 2021 
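As an illustration of the idea of propagating latent knowledge across semantically related topics, here is a minimal sketch in Python; the update rule, variable names, and relatedness matrix are illustrative assumptions, not the actual TrueLearn formulation.

```python
import numpy as np

def propagate_skill(skill_means, relatedness, alpha=0.1):
    """Spread a fraction of each topic's estimated skill to semantically
    related topics; `relatedness` is a row-normalised similarity matrix
    (e.g. derived from the Wikipedia link graph). Names and the blending
    rule are illustrative, not the paper's actual update."""
    return (1 - alpha) * skill_means + alpha * relatedness @ skill_means

# Toy example: 3 topics, topics 0 and 1 strongly related.
relatedness = np.array([[0.0, 0.9, 0.1],
                        [0.9, 0.0, 0.1],
                        [0.1, 0.1, 0.0]])
relatedness /= relatedness.sum(axis=1, keepdims=True)
skills = np.array([1.0, 0.0, 0.0])  # learner has shown skill only on topic 0
print(propagate_skill(skills, relatedness))  # some skill leaks to topic 1
```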


Learning by Self-Explanation, with Application to Neural Architecture Search

Dec 23, 2020
Ramtin Hosseini, Pengtao Xie

Learning by self-explanation, where students explain a learned topic to themselves to deepen their understanding of it, is a broadly used methodology in human learning and shows great effectiveness in improving learning outcomes. We are interested in investigating whether this powerful learning technique can be borrowed from humans to improve the learning abilities of machines. We propose a novel learning approach called learning by self-explanation (LeaSE). In our approach, an explainer model improves its learning ability by trying to clearly explain to an audience model how a prediction outcome is made. We propose a multi-level optimization framework to formulate LeaSE, which involves four stages of learning: the explainer learns; the explainer explains; the audience learns; the explainer and audience validate themselves. We develop an efficient algorithm to solve the LeaSE problem. We apply our approach to neural architecture search on CIFAR-100, CIFAR-10, and ImageNet. The results demonstrate the effectiveness of our method.

* arXiv admin note: substantial text overlap with arXiv:2011.15102, arXiv:2012.04863, arXiv:2012.12502 
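A minimal sketch of the four LeaSE stages on toy data is shown below; the model sizes, distillation-style loss, and alternating schedule are illustrative guesses rather than the paper's exact multi-level optimization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
X_train, y_train = torch.randn(64, 10), torch.randint(0, 3, (64,))
X_val,   y_val   = torch.randn(32, 10), torch.randint(0, 3, (32,))

explainer = nn.Linear(10, 3)
audience  = nn.Linear(10, 3)
opt_e = torch.optim.SGD(explainer.parameters(), lr=0.1)
opt_a = torch.optim.SGD(audience.parameters(), lr=0.1)

for step in range(100):
    # Stage 1: the explainer learns on the training data.
    opt_e.zero_grad()
    F.cross_entropy(explainer(X_train), y_train).backward()
    opt_e.step()

    # Stages 2-3: the explainer "explains" (soft predictions) and the
    # audience learns to match them (distillation-style KL loss).
    opt_a.zero_grad()
    with torch.no_grad():
        explanation = F.log_softmax(explainer(X_train), dim=1)
    kl = F.kl_div(F.log_softmax(audience(X_train), dim=1),
                  explanation, reduction="batchmean", log_target=True)
    kl.backward()
    opt_a.step()

# Stage 4: both validate themselves; in the paper this signal drives the
# architecture search, here we simply report the audience's validation loss.
with torch.no_grad():
    print(float(F.cross_entropy(audience(X_val), y_val)))
```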


Machine learning on Big Data from Twitter to understand public reactions to COVID-19

May 22, 2020
Jia Xue, Junxiang Chen, Chen Chen, ChengDa Zheng, Tingshao Zhu

The study aims to understand Twitter users' discussions and reactions about COVID-19. We use machine learning techniques to analyze about 1.8 million tweets related to coronavirus collected from January 20th to March 7th, 2020. A total of 11 salient topics are identified and then categorized into 10 themes, such as "cases outside China (worldwide)," "COVID-19 outbreak in South Korea," "early signs of the outbreak in New York," "Diamond Princess cruise," "economic impact," "preventive/protective measures," "authorities," and "supply chain." Results do not reveal treatment- and/or symptom-related messages as a prevalent topic on Twitter. We also run sentiment analysis, and the results show that trust in the authorities remained a prevalent emotion, but mixed feelings of trust in authorities, fear of the outbreak, and anticipation of potential preventive measures are identified. Implications and limitations of the study are also discussed.
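A minimal sketch of the kind of pipeline described above (topic modelling over tweets) follows; the preprocessing, topic-model variant (scikit-learn LDA), number of topics, and example tweets are illustrative assumptions, not the paper's setup.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Invented placeholder tweets, not the collected COVID-19 dataset.
tweets = [
    "new confirmed cases reported outside china today",
    "passengers quarantined on the diamond princess cruise ship",
    "wash your hands and wear a mask as a preventive measure",
    "stock markets fall as the economic impact of the outbreak grows",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Inspect the top words per topic to assign human-readable theme labels.
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {top}")
```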



Decentralized Multi-Agent Reinforcement Learning with Networked Agents: Recent Advances

Dec 09, 2019
Kaiqing Zhang, Zhuoran Yang, Tamer Başar

Multi-agent reinforcement learning (MARL) has long been a significant and everlasting research topic in both machine learning and control. With the recent development of (single-agent) deep RL, there is a resurgence of interest in developing new MARL algorithms, especially those that are backed by theoretical analysis. In this paper, we review some recent advances in a sub-area of this topic: decentralized MARL with networked agents. Specifically, multiple agents perform sequential decision-making in a common environment without the coordination of any central controller. Instead, the agents are allowed to exchange information with their neighbors over a communication network. Such a setting finds broad applications in the control and operation of robots, unmanned vehicles, mobile sensor networks, and the smart grid. This review is built upon several of our research endeavors in this direction, together with progress made by other researchers along this line. We hope this review will inspire more research efforts devoted to this exciting yet challenging area.

* This is an invited submission to a Special Issue of the Journal of Frontiers of Information Technology & Electronic Engineering (FITEE). Most of the contents are based on Sec. 4 of our recent overview arXiv:1911.10635, with a focus on the setting of decentralized MARL with networked agents 
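The following sketch illustrates the consensus-style update that underlies many decentralized algorithms with networked agents: each agent mixes its local parameter estimate with its neighbors' over the communication graph before applying a local update. The mixing matrix, toy objective, and step size are illustrative, not taken from the review.

```python
import numpy as np

np.random.seed(0)
n_agents, dim = 4, 3
# Symmetric, doubly stochastic mixing matrix for a simple line network.
W = np.array([[0.50, 0.50, 0.00, 0.00],
              [0.50, 0.25, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.00, 0.00, 0.25, 0.75]])
theta = np.random.randn(n_agents, dim)   # one parameter vector per agent

for t in range(200):
    local_grad = theta - 1.0              # toy objective: pull each estimate toward 1
    theta = W @ theta - 0.1 * local_grad  # consensus with neighbors + local step

print(theta)  # agents' estimates agree and approach the common optimum
```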


Corpus Wide Argument Mining -- a Working Solution

Nov 25, 2019
Liat Ein-Dor, Eyal Shnarch, Lena Dankin, Alon Halfon, Benjamin Sznajder, Ariel Gera, Carlos Alzate, Martin Gleize, Leshem Choshen, Yufang Hou, Yonatan Bilu, Ranit Aharonov, Noam Slonim

One of the main tasks in argument mining is the retrieval of argumentative content pertaining to a given topic. Most previous work addressed this task by retrieving a relatively small number of relevant documents as the initial source for such content. This line of research yielded moderate success, which is of limited use in a real-world system. Furthermore, for such a system to yield a comprehensive set of relevant arguments over a wide range of topics, it requires leveraging a large and diverse corpus in an appropriate manner. Here we present a first end-to-end high-precision, corpus-wide argument mining system. This is made possible by combining sentence-level queries over an appropriate indexing of a very large corpus of newspaper articles with an iterative annotation scheme. This scheme addresses the inherent label bias in the data and pinpoints the regions of the sample space whose manual labeling is required to obtain high precision among top-ranked candidates.

* AAAI 2020 


Overlapping Clustering Models, and One (class) SVM to Bind Them All

Nov 05, 2018
Xueyu Mao, Purnamrita Sarkar, Deepayan Chakrabarti

People belong to multiple communities, words belong to multiple topics, and books cover multiple genres; overlapping clusters are commonplace. Many existing overlapping clustering methods model each person (or word, or book) as a non-negative weighted combination of "exemplars" who belong solely to one community, with some small noise. Geometrically, each person is a point on a cone whose corners are these exemplars. This basic form encompasses the widely used Mixed Membership Stochastic Blockmodel of networks (Airoldi et al., 2008) and its degree-corrected variants (Jin et al., 2017), as well as topic models such as LDA (Blei et al., 2003). We show that a simple one-class SVM yields provably consistent parameter inference for all such models, and scales to large datasets. Experimental results on several simulated and real datasets show our algorithm (called SVM-cone) is both accurate and scalable.

* In NIPS 2018 
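A minimal sketch of the geometric idea behind SVM-cone follows: rows of a row-normalized embedding lie in a cone whose corners are the pure exemplars, and the support vectors of a linear one-class SVM concentrate near those corners. The toy data below is illustrative, not the paper's spectral pipeline.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
corners = np.eye(3)                                      # pure exemplars, one per community
weights = rng.dirichlet(np.ones(3) * 0.3, size=300)      # mixed (mostly near-pure) memberships
points = np.vstack([corners, weights @ corners])
points /= np.linalg.norm(points, axis=1, keepdims=True)  # row-normalise onto the sphere

ocsvm = OneClassSVM(kernel="linear", nu=0.1).fit(points)
support = points[ocsvm.support_]   # support vectors concentrate near the three corners
print(support[:5])
```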


From handcrafted to deep local invariant features

Jul 26, 2018
Gabriela Csurka, Martin Humenberger

The aim of this paper is to present a comprehensive overview of the evolution of local features from handcrafted to deep learning based methods, followed by a discussion of several benchmark and evaluation papers on this topic. Our investigations are motivated by 3D reconstruction problems, where the precise location of the features is important. During the description of the methods, we highlight and explain challenges of feature extraction and potential ways to overcome them. We first present traditional handcrafted methods, followed by data-driven, learning-based approaches, and finally detail deep learning based methods. We are convinced that this evolutionary presentation will help the reader fully understand the topic of image and region description in order to make the best use of it in modern computer vision applications. In other words, understanding traditional methods and their motivation helps in understanding modern approaches and how machine learning is used to improve the results. We also provide comprehensive references to the most relevant literature and code.



Measuring Conversational Productivity in Child Forensic Interviews

Jun 08, 2018
Victor Ardulov, Manoj Kumar, Shanna Williams, Thomas Lyon, Shrikanth Narayanan

Child Forensic Interviewing (FI) presents a challenge for effective information retrieval and decision making. The high stakes associated with the process demand that expert legal interviewers are able to effectively establish a channel of communication and elicit substantive knowledge from the child-client while minimizing potential for experiencing trauma. As a first step toward computationally modeling and producing quality spoken interviewing strategies and a generalized understanding of interview dynamics, we propose a novel methodology to computationally model effectiveness criteria, by applying summarization and topic modeling techniques to objectively measure and rank the responsiveness and conversational productivity of a child during FI. We score information retrieval by constructing an agenda to represent general topics of interest and measuring alignment with a given response and leveraging lexical entrainment for responsiveness. For comparison, we present our methods along with traditional metrics of evaluation and discuss the use of prior information for generating situational awareness.
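As a rough illustration of the agenda-alignment scoring described above, the sketch below represents agenda topics and a response in a shared TF-IDF space and scores the response by its best similarity to any agenda item; the agenda items, response, and scoring choice are invented placeholders, not the paper's method or data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical agenda of general topics of interest and one response.
agenda = ["what happened at school that day",
          "who else was in the room",
          "what was said afterwards"]
response = "we were in the room and then my teacher came in"

vec = TfidfVectorizer().fit(agenda + [response])
sims = cosine_similarity(vec.transform([response]), vec.transform(agenda))
print(sims.max())  # higher score = response better aligned with the agenda
```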



Data Mining for Actionable Knowledge: A Survey

Jan 27, 2005
Zengyou He, Xiaofei Xu, Shengchun Deng

The data mining process consists of a series of steps ranging from data cleaning, data selection, and transformation to pattern evaluation and visualization. One of the central problems in data mining is to make the mined patterns or knowledge actionable. Here, the term actionable means that the mined patterns suggest concrete and profitable actions to the decision-maker; that is, the user can do something to bring direct benefits (increase in profits, reduction in cost, improvement in efficiency, etc.) to the organization's advantage. However, no comprehensive survey has been written on this topic. The goal of this paper is to fill this void. In this paper, we first present two frameworks for mining actionable knowledge that are implicitly adopted by existing research methods. Then we try to situate some of the research on this topic from two different viewpoints: 1) data mining tasks and 2) adopted framework. Finally, we specify issues that are either not addressed or insufficiently studied yet and conclude the paper.

* 11 pages 


Detecting COVID-19 Conspiracy Theories with Transformers and TF-IDF

May 01, 2022
Haoming Guo, Tianyi Huang, Huixuan Huang, Mingyue Fan, Gerald Friedland

The sharing of fake news and conspiracy theories on social media has widespread negative effects. By designing and applying different machine learning models, researchers have made progress in detecting fake news from text. However, existing research places a heavy emphasis on general, common-sense fake news, while in reality fake news often involves rapidly changing topics and domain-specific vocabulary. In this paper, we present our methods and results for three fake news detection tasks at the MediaEval 2021 benchmark that specifically involve COVID-19 related topics. We experiment with a group of text-based models including Support Vector Machines, Random Forest, BERT, and RoBERTa. We find that a pre-trained transformer yields the best validation results, but a randomly initialized transformer with smart design can also be trained to reach accuracies close to those of the pre-trained transformer.
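A minimal sketch of the classical text-classification baselines mentioned above (TF-IDF features with an SVM and a random forest) follows; the example texts and labels are invented placeholders, not the MediaEval 2021 data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Invented placeholder examples: 1 = conspiracy-like, 0 = not.
texts  = ["5g towers cause the virus",
          "vaccines are being tested in clinical trials",
          "the outbreak was planned by elites",
          "officials report new case counts"]
labels = [1, 0, 1, 0]

svm = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(texts, labels)
rf  = make_pipeline(TfidfVectorizer(),
                    RandomForestClassifier(random_state=0)).fit(texts, labels)

print(svm.predict(["they are hiding the real numbers"]))
print(rf.predict(["new trial results were published today"]))
```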


