"Topic": models, code, and papers

High Quality Real-Time Structured Debate Generation

Dec 01, 2020
Eric Bolton, Alex Calderwood, Niles Christensen, Jerome Kafrouni, Iddo Drori

Automatically generating debates is a challenging task that requires an understanding of arguments and of how to negate or support them. In this work, we define debate trees and paths for generating debates while enforcing a high-level structure and grammar. We leverage a large corpus of tree-structured debates that have metadata associated with each argument. We develop a framework for generating plausible debates that is agnostic to the sentence embedding model. Our results demonstrate the ability to generate debates in real time on complex topics at a quality close to that of humans, as evaluated by the style, content, and strategy metrics used for judging competitive human debates. In the spirit of reproducible research, we make our data, models, and code publicly available.
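
The abstract does not spell out the data structures, so the following is a minimal sketch of what a debate tree and a debate path could look like; the class and field names are illustrative assumptions, not the authors' released code.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ArgumentNode:
    """One argument in a tree-structured debate (names are illustrative, not the paper's)."""
    text: str
    stance: str                                   # "pro" or "con" relative to the parent claim
    children: List["ArgumentNode"] = field(default_factory=list)

def debate_paths(node, prefix=None):
    """Enumerate root-to-leaf argument paths; each path is one candidate debate to realise."""
    prefix = (prefix or []) + [node]
    if not node.children:
        yield prefix
        return
    for child in node.children:
        yield from debate_paths(child, prefix)

root = ArgumentNode("Universal basic income should be adopted.", "pro", [
    ArgumentNode("It reduces poverty without means-testing overhead.", "pro"),
    ArgumentNode("It is too expensive at a national scale.", "con",
                 [ArgumentNode("Savings from consolidated programs offset the cost.", "pro")]),
])
for path in debate_paths(root):
    print(" -> ".join(f"[{n.stance}] {n.text}" for n in path))
```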



Contract Scheduling With Predictions

Nov 24, 2020
Spyros Angelopoulos, Shahin Kamali

Contract scheduling is a general technique for designing a system with interruptible capabilities, given an algorithm that is not necessarily interruptible. Previous work on this topic has largely assumed that the interruption is a worst-case deadline unknown to the scheduler. In this work, we study the setting in which there is a potentially erroneous prediction concerning the interruption. Specifically, we consider the setting in which the prediction describes the time at which the interruption occurs, as well as the setting in which the prediction is obtained as a response to one or more binary queries. For both settings, we investigate tradeoffs between robustness (i.e., the worst-case performance assuming an adversarial prediction) and consistency (i.e., the performance assuming that the prediction is error-free), in terms of both positive and negative results.
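
For context, the classical prediction-free schedule runs contracts of geometrically increasing lengths and, at an interruption, returns the largest completed contract; the paper's prediction-aware schedules are not reproduced here. A minimal sketch of that baseline:

```python
def doubling_schedule(horizon):
    """Classical contract schedule: lengths 1, 2, 4, ... run back to back."""
    lengths, length, elapsed = [], 1.0, 0.0
    while elapsed < horizon:
        lengths.append(length)
        elapsed += length
        length *= 2.0
    return lengths

def longest_completed(lengths, interruption):
    """Length of the largest contract that finishes before the interruption."""
    elapsed, best = 0.0, 0.0
    for length in lengths:
        if elapsed + length > interruption:
            break
        elapsed += length
        best = max(best, length)
    return best

schedule = doubling_schedule(horizon=100.0)
T = 37.0                                     # interruption time (worst case or predicted)
# Acceleration ratio at T: time an uninterruptible algorithm would have had, divided by
# the largest completed contract. For this schedule it is at most 4 once T >= 1.
print(T / longest_completed(schedule, T))
```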



Recent Trends in the Use of Deep Learning Models for Grammar Error Handling

Sep 04, 2020
Mina Naghshnejad, Tarun Joshi, Vijayan N. Nair

Grammar error handling (GEH) is an important topic in natural language processing (NLP). GEH includes both grammar error detection and grammar error correction. Recent advances in computation systems have promoted the use of deep learning (DL) models for NLP problems such as GEH. In this survey we focus on two main DL approaches for GEH: neural machine translation models and editor models. We describe the three main stages of the pipeline for these models: data preparation, training, and inference. Additionally, we discuss different techniques to improve the performance of these models at each stage of the pipeline. We compare the performance of different models and conclude with proposed future directions.
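
As a concrete illustration of the editor-model family the survey covers, sequence-tagging correctors predict per-token edit operations instead of rewriting the whole sentence; the tag set and example below are assumptions for illustration, not taken from any specific system in the survey.

```python
def apply_edits(tokens, edits):
    """Apply per-token edit tags (KEEP / DELETE / REPLACE_x / APPEND_x) to a sentence."""
    corrected = []
    for token, edit in zip(tokens, edits):
        if edit == "DELETE":
            continue
        if edit.startswith("REPLACE_"):
            corrected.append(edit[len("REPLACE_"):])
        else:
            corrected.append(token)
        if edit.startswith("APPEND_"):
            corrected.append(edit[len("APPEND_"):])
    return corrected

source = ["She", "go", "to", "school", "yesterday", "."]
edits  = ["KEEP", "REPLACE_went", "KEEP", "KEEP", "KEEP", "KEEP"]
print(" ".join(apply_edits(source, edits)))   # She went to school yesterday .
```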



COVID-Twitter-BERT: A Natural Language Processing Model to Analyse COVID-19 Content on Twitter

May 15, 2020
Martin Müller, Marcel Salathé, Per E Kummervold

In this work, we release COVID-Twitter-BERT (CT-BERT), a transformer-based model, pretrained on a large corpus of Twitter messages on the topic of COVID-19. Our model shows a 10-30% marginal improvement compared to its base model, BERT-Large, on five different classification datasets. The largest improvements are on the target domain. Pretrained transformer models, such as CT-BERT, are trained on a specific target domain and can be used for a wide variety of natural language processing tasks, including classification, question-answering and chatbots. CT-BERT is optimised to be used on COVID-19 content, in particular social media posts from Twitter.
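
Assuming the model is published on the Hugging Face hub (the identifier below is an assumption, not stated in the abstract), a minimal loading sketch for fine-tuning CT-BERT on a classification task looks like:

```python
# Hedged sketch using the transformers API; the hub identifier is assumed, not quoted from the paper.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "digitalepidemiologylab/covid-twitter-bert"   # assumed identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

inputs = tokenizer("Vaccines are now available at my local pharmacy.",
                   return_tensors="pt", truncation=True)
logits = model(**inputs).logits   # classification head is untrained until fine-tuned
print(logits.shape)               # torch.Size([1, 3])
```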



Machine Education: Designing semantically ordered and ontologically guided modular neural networks

Feb 07, 2020
Hussein A. Abbass, Sondoss Elsawah, Eleni Petraki, Robert Hunjet

The literature on machine teaching, machine education, and curriculum design for machines is in its infancy, with the sparse papers on the topic focusing primarily on data and model engineering factors to improve machine learning. In this paper, we first discuss selected attempts to date on machine teaching and education. We then bring together theories and methodologies from human education to structure and mathematically define the core problems in lesson design for machine education, and the modelling approaches required to support the steps of machine education. Last but not least, we offer an ontology-based methodology to guide the development of lesson plans to produce transparent and explainable modular learning machines, including neural networks.

* IEEE Symposium Series on Computational Intelligence, 2019 


Finite sample properties of parametric MMD estimation: robustness to misspecification and dependence

Dec 16, 2019
Badr-Eddine Chérief-Abdellatif, Pierre Alquier

Many works in statistics aim at designing a universal estimation procedure. This question is of major interest, in particular because it leads to robust estimators, a very hot topic in statistics and machine learning. In this paper, we tackle the problem of universal estimation using a minimum distance estimator presented in Briol et al. (2019) based on the Maximum Mean Discrepancy. We show that the estimator is robust both to dependence and to the presence of outliers in the dataset. We also highlight the connections that may exist with minimum distance estimators using the L2-distance. Finally, we provide a theoretical study of the stochastic gradient descent algorithm used to compute the estimator, and we support our findings with numerical simulations.
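
The abstract does not restate the estimator, but the minimum-MMD construction it builds on (Briol et al., 2019) can be summarised in standard notation; this is a paraphrase, not a quotation from the paper.

```latex
% Squared MMD between distributions P and Q for a positive-definite kernel k:
\mathrm{MMD}^2_k(P, Q)
  = \mathbb{E}_{x, x' \sim P}\,[k(x, x')]
  + \mathbb{E}_{y, y' \sim Q}\,[k(y, y')]
  - 2\,\mathbb{E}_{x \sim P,\, y \sim Q}\,[k(x, y)].

% The parametric estimator matches the model to the empirical distribution \hat{P}_n:
\hat{\theta}_n = \arg\min_{\theta \in \Theta} \mathrm{MMD}_k\big(P_\theta, \hat{P}_n\big),

% studied in the paper under misspecification, dependence, and outliers,
% and computed in practice with (stochastic) gradient descent.
```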



Assessing Partisan Traits of News Text Attributions

Jan 25, 2019
Logan Martel, Edward Newell, Drew Margolin, Derek Ruths

On the topic of journalistic integrity, the current state of accurate, impartial news reporting has garnered much debate in the context of the 2016 US Presidential Election. In pursuit of computational evaluation of news text, the statements (attributions) ascribed by media outlets to sources provide a common category of evidence on which to operate. In this paper, we develop an approach to compare partisan traits of news text attributions and apply it to characterize differences in statements ascribed to candidate Hillary Clinton and incumbent President Donald Trump. In doing so, we present a model trained on over 600 in-house annotated attributions to identify each candidate with accuracy > 88%. Finally, we discuss insights from its performance for future research.

* Honours Thesis completed for the McGill University B.Sc. in Software Engineering, supervised by Professor Derek Ruths, Network Dynamics Lab 
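
The abstract does not specify the classifier, so the sketch below is only an illustrative baseline for predicting the attributed source from an attribution's text; the training examples are hypothetical, and this is not the authors' >88%-accuracy model.

```python
# Illustrative baseline only (TF-IDF + logistic regression); the paper's model and
# its in-house annotated attributions are not reproduced here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

attributions = [                                   # hypothetical training examples
    "We are going to build a great, great wall.",
    "We must invest in clean energy and good jobs.",
]
sources = ["Trump", "Clinton"]

classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(attributions, sources)
print(classifier.predict(["We will invest in renewable energy."]))
```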


Using Sentiment Induction to Understand Variation in Gendered Online Communities

Nov 16, 2018
Li Lucy, Julia Mendelsohn

We analyze gendered communities defined in three different ways: text, users, and sentiment. Differences across these representations reveal facets of communities' distinctive identities, such as social group, topic, and attitudes. Two communities may have high text similarity but not user similarity, or vice versa, and word usage also does not vary according to a clear-cut, binary perspective of gender. Community-specific sentiment lexicons demonstrate that sentiment can be a useful indicator of words' social meaning and community values, especially in the context of discussion content and user demographics. Our results show that social platforms such as Reddit are active settings for different constructions of gender.

* 11 pages, 4 figures, to appear in proceedings of the Society for Computation in Linguistics (SCIL 2019) 
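
One simple way to make the abstract's notion of text similarity between two communities concrete (an assumption for illustration, not necessarily the paper's measure) is cosine similarity between their bag-of-words vectors:

```python
# Sketch of a text-similarity comparison between two communities' posts;
# the corpora here are toy strings, and the paper's exact measure may differ.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

community_a_posts = "makeup and skincare routine recommendations please"
community_b_posts = "skincare routine recommendations for sensitive skin"

vectors = CountVectorizer().fit_transform([community_a_posts, community_b_posts])
print(cosine_similarity(vectors[0], vectors[1])[0, 0])
```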


From Random to Supervised: A Novel Dropout Mechanism Integrated with Global Information

Oct 10, 2018
Hengru Xu, Shen Li, Renfen Hu, Si Li, Sheng Gao

Dropout is used to avoid overfitting by randomly dropping units from neural networks during training. Inspired by dropout, this paper presents GI-Dropout, a novel dropout method that integrates global information to improve neural networks for text classification. Unlike the traditional dropout method, in which units are dropped randomly with the same probability, we aim to use explicit instructions based on global information about the dataset to guide the training process. With GI-Dropout, the model is encouraged to pay more attention to inapparent features or patterns. Experiments demonstrate the effectiveness of dropout with global information on seven text classification tasks, including sentiment analysis and topic classification.
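
The abstract's key idea is that drop probabilities are non-uniform and derived from dataset-level statistics; the sketch below illustrates only that shape (the frequency-style heuristic and function names are assumptions, not the paper's exact scheme).

```python
import numpy as np

rng = np.random.default_rng(0)

def guided_dropout(activations, drop_probs, training=True):
    """Drop unit i with probability drop_probs[i]; rescale kept units (inverted dropout)."""
    if not training:
        return activations
    keep_probs = 1.0 - drop_probs
    mask = rng.random(activations.shape[-1]) < keep_probs
    return activations * mask / keep_probs

features = np.ones(5)
# Illustrative heuristic: features judged less informative by global statistics
# receive a higher drop probability than rarer, more salient ones.
drop_probs = np.array([0.5, 0.5, 0.2, 0.2, 0.1])
print(guided_dropout(features, drop_probs))
```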



Data-Driven Dialogue Systems for Social Agents

Sep 10, 2017
Kevin K. Bowden, Shereen Oraby, Amita Misra, Jiaqi Wu, Stephanie Lukin

In order to build dialogue systems that tackle the ambitious task of holding social conversations, we argue that we need a data-driven approach that includes insight into human conversational chit-chat and that incorporates different natural language processing modules. Our strategy is to analyze and index large corpora of social media data, including Twitter conversations, online debates, dialogues between friends, and blog posts, and then to couple this data retrieval with modules that perform tasks such as sentiment and style analysis, topic modeling, and summarization. We aim for personal assistants that can learn more nuanced human language, and that grow from task-oriented agents into more personable social bots.

* IWSDS 2017 
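
The abstract describes a retrieve-then-filter strategy over indexed social media; the sketch below shows only that overall shape, with toy retrieval and ranking functions standing in for the sentiment, style, topic, and summarization modules (all names hypothetical, not the authors' system).

```python
import re

def tokenize(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def retrieve_candidates(user_turn, index):
    """Toy retrieval: return indexed utterances sharing any word with the user's turn."""
    words = tokenize(user_turn)
    return [u for u in index if words & tokenize(u)]

def rank_candidates(candidates):
    """Stand-in for the sentiment/style/topic modules: prefer short, question-free replies."""
    return sorted(candidates, key=lambda u: (u.count("?"), len(u)))

index = [
    "I love hiking on weekends, do you?",
    "Hiking is great, I went up a mountain last week.",
    "Not sure about that.",
]
candidates = retrieve_candidates("Do you enjoy hiking?", index)
print(rank_candidates(candidates)[0] if candidates else "Tell me more!")
```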

