"Text": models, code, and papers

Natural Question Generation with Reinforcement Learning Based Graph-to-Sequence Model

Oct 19, 2019
Yu Chen, Lingfei Wu, Mohammed J. Zaki

Natural question generation (QG) aims to generate questions from a passage and an answer. In this paper, we propose a novel reinforcement learning (RL) based graph-to-sequence (Graph2Seq) model for QG. Our model consists of a Graph2Seq generator where a novel Bidirectional Gated Graph Neural Network is proposed to embed the passage, and a hybrid evaluator with a mixed objective combining both cross-entropy and RL losses to ensure the generation of syntactically and semantically valid text. The proposed model outperforms previous state-of-the-art methods by a large margin on the SQuAD dataset.

* 4 pages. Accepted at the NeurIPS 2019 Workshop on Graph Representation Learning (NeurIPS GRL 2019). Final Version 
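
A minimal sketch of the kind of mixed objective described here, combining token-level cross-entropy with a self-critical, REINFORCE-style reward term; the reward inputs, the mixing weight gamma, and the function names are illustrative assumptions rather than the authors' implementation.

    # Hybrid-evaluator sketch (assumptions): cross-entropy plus policy-gradient reward.
    import torch.nn.functional as F

    def mixed_objective(logits, targets, sampled_log_probs, sampled_reward,
                        greedy_reward, gamma=0.5, pad_id=0):
        """logits: (batch, seq_len, vocab); targets: (batch, seq_len) reference question.
        sampled_log_probs: (batch,) summed log-probs of a sampled question.
        sampled_reward / greedy_reward: (batch,) e.g. BLEU of sampled vs. greedy decode."""
        # Teacher-forcing cross-entropy against the reference question.
        ce = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                             targets.reshape(-1), ignore_index=pad_id)
        # Self-critical policy gradient: the reward advantage scales the sampled log-prob.
        advantage = (sampled_reward - greedy_reward).detach()
        rl = -(advantage * sampled_log_probs).mean()
        return gamma * rl + (1.0 - gamma) * ce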

Zero-shot transfer for implicit discourse relation classification

Jul 30, 2019
Murathan Kurfalı, Robert Östling

Automatically classifying the relation between sentences in a discourse is a challenging task, in particular when there is no overt expression of the relation. The task is made even more challenging by the fact that annotated training data exists only for a small number of languages, such as English and Chinese. We present a new system using zero-shot transfer learning for implicit discourse relation classification, where the only resource used for the target language is unannotated parallel text. This system is evaluated on the discourse-annotated TED-MDB parallel corpus, where it obtains good results for all seven languages using only English training data.

* to be presented at SIGDIAL 2019 
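
A minimal sketch of the zero-shot recipe outlined here: represent each argument pair with language-agnostic sentence embeddings, train a classifier on English labels only, and apply it unchanged to the target languages. The embed_sentences stand-in, the concatenated-argument representation, and the logistic-regression classifier are assumptions, not the authors' system.

    # Zero-shot transfer sketch (assumptions): English-only training over a shared
    # multilingual embedding space, then direct prediction on other languages.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def embed_sentences(sentences, lang):
        """Stand-in for a cross-lingual sentence encoder (e.g. one trained on
        parallel text); this toy version only exists so the sketch runs."""
        rng = np.random.default_rng(0)
        return rng.normal(size=(len(sentences), 64))

    def train_english_classifier(arg1_sents, arg2_sents, labels):
        # An implicit relation is represented by its two arguments' embeddings, concatenated.
        X = np.hstack([embed_sentences(arg1_sents, "en"), embed_sentences(arg2_sents, "en")])
        return LogisticRegression(max_iter=1000).fit(X, labels)

    def predict_target_language(clf, arg1_sents, arg2_sents, lang):
        # No target-language labels are used; only the shared embedding space.
        X = np.hstack([embed_sentences(arg1_sents, lang), embed_sentences(arg2_sents, lang)])
        return clf.predict(X)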

Guidelines for Responsible and Human-Centered Use of Explainable Machine Learning

Jun 08, 2019
Patrick Hall

Explainable machine learning (ML) has been implemented in numerous open source and proprietary software packages, and explainable ML is an important aspect of commercial predictive modeling. However, explainable ML can be misused, particularly as a faulty safeguard for harmful black-boxes, e.g., fairwashing, and for other malevolent purposes like model stealing. This text discusses definitions, examples, and guidelines that promote a holistic and human-centered approach to ML, which includes interpretable (i.e., white-box) models and explanatory, debugging, and disparate impact analysis techniques.

* Errata and updates available here: https://github.com/jphall663/responsible_xai 

Do Human Rationales Improve Machine Explanations?

May 31, 2019
Julia Strout, Ye Zhang, Raymond J. Mooney

Work on "learning with rationales" shows that humans providing explanations to a machine learning system can improve the system's predictive accuracy. However, this work has not been connected to work in "explainable AI", which concerns machines explaining their reasoning to humans. In this work, we show that learning with rationales can also improve the quality of the machine's explanations as evaluated by human judges. Specifically, we present experiments showing that, for CNN-based text classification, explanations generated using "supervised attention" are judged superior to explanations generated using normal unsupervised attention.
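
A minimal sketch of one common form of "supervised attention" (an assumption, not necessarily this paper's exact loss): the model's attention distribution is pulled toward a normalized human rationale mask with a KL term added to the classification loss.

    # Supervised-attention sketch (assumptions): classification loss plus a KL term
    # that encourages attention mass on tokens humans marked as rationales.
    import torch.nn.functional as F

    def supervised_attention_loss(class_logits, labels, attention, rationale_mask,
                                  lam=1.0, eps=1e-8):
        """attention: (batch, seq_len) softmax weights over tokens.
        rationale_mask: (batch, seq_len), 1.0 on tokens a human marked as evidence."""
        cls_loss = F.cross_entropy(class_logits, labels)
        # Turn the rationale mask into a target distribution over tokens.
        target = rationale_mask / (rationale_mask.sum(dim=1, keepdim=True) + eps)
        attn_loss = F.kl_div((attention + eps).log(), target, reduction="batchmean")
        return cls_loss + lam * attn_loss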


Classifying textual data: shallow, deep and ensemble methods

Feb 18, 2019
Laura Anderlucci, Lucia Guastadisegni, Cinzia Viroli

This paper focuses on a comparative evaluation of the most common and modern methods for text classification, including recent deep learning strategies and ensemble methods. The study is motivated by a challenging real data problem, characterized by high-dimensional and extremely sparse data, deriving from incoming calls to the customer care of an Italian phone company. We show that deep learning outperforms many classical (shallow) strategies, but that combining shallow and deep learning methods in a single ensemble classifier may improve the robustness and accuracy of the individual classification methods.
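
A minimal sketch of a shallow-plus-deep ensemble in this spirit; the particular members (TF-IDF with logistic regression, TF-IDF with a multilayer perceptron) and the soft-voting rule are assumptions, since the paper compares several methods.

    # Shallow + deep ensemble sketch (assumptions): soft-vote over class probabilities.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline

    def fit_ensemble(texts, labels):
        shallow = make_pipeline(TfidfVectorizer(min_df=2), LogisticRegression(max_iter=1000))
        deep = make_pipeline(TfidfVectorizer(min_df=2),
                             MLPClassifier(hidden_layer_sizes=(256, 128), max_iter=200))
        shallow.fit(texts, labels)
        deep.fit(texts, labels)
        return [shallow, deep]

    def predict_ensemble(models, texts):
        # Average the members' class probabilities, then pick the most likely class.
        probs = np.mean([m.predict_proba(texts) for m in models], axis=0)
        return models[0].classes_[probs.argmax(axis=1)]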


Zero-Shot Anticipation for Instructional Activities

Dec 06, 2018
Fadime Sener, Angela Yao

How can we teach a robot to predict what will happen next for an activity it has never seen before? We address the problem of zero-shot anticipation by presenting a hierarchical model that generalizes instructional knowledge from large-scale text-corpora and transfers the knowledge to the visual domain. Given a portion of an instructional video, our model predicts coherent and plausible actions multiple steps into the future, all in rich natural language. To demonstrate the anticipation capabilities of our model, we introduce the Tasty Videos dataset, a collection of 2511 recipes for zero-shot learning, recognition and anticipation.


Structured Neural Summarization

Nov 05, 2018
Patrick Fernandes, Miltiadis Allamanis, Marc Brockschmidt

Summarization of long sequences into a concise statement is a core problem in natural language processing, requiring non-trivial understanding of the input. Based on the promising results of graph neural networks on highly structured data, we develop a framework to extend existing sequence encoders with a graph component that can reason about long-distance relationships in weakly structured data such as text. In an extensive evaluation, we show that the resulting hybrid sequence-graph models outperform both pure sequence models as well as pure graph models on a range of summarization tasks.
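
A minimal sketch of the hybrid idea: a recurrent sequence encoder provides local context, and a simple message-passing step over an adjacency matrix (e.g. long-distance links such as coreference edges) mixes in graph structure. The specific layers and the dense adjacency representation are assumptions, not the paper's architecture.

    # Hybrid sequence-graph encoder sketch (assumptions).
    import torch
    import torch.nn as nn

    class SequenceGraphEncoder(nn.Module):
        def __init__(self, vocab_size, dim=128, gnn_steps=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)
            self.rnn = nn.GRU(dim, dim // 2, batch_first=True, bidirectional=True)
            self.msg = nn.Linear(dim, dim)
            self.update = nn.GRUCell(dim, dim)
            self.gnn_steps = gnn_steps

        def forward(self, token_ids, adjacency):
            """token_ids: (batch, seq_len); adjacency: (batch, seq_len, seq_len) 0/1 edges."""
            h, _ = self.rnn(self.embed(token_ids))      # sequential (local) context
            b, n, d = h.shape
            for _ in range(self.gnn_steps):
                # Aggregate messages from graph neighbours along long-distance edges.
                messages = torch.bmm(adjacency, self.msg(h))
                h = self.update(messages.reshape(b * n, d),
                                h.reshape(b * n, d)).reshape(b, n, d)
            return h                                    # node states for an attentional decoder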


Training Deeper Neural Machine Translation Models with Transparent Attention

Sep 04, 2018
Ankur Bapna, Mia Xu Chen, Orhan Firat, Yuan Cao, Yonghui Wu

While current state-of-the-art NMT models, such as RNN seq2seq and Transformers, possess a large number of parameters, they are still shallow in comparison to convolutional models used for both text and vision applications. In this work we attempt to train significantly (2-3x) deeper Transformer and Bi-RNN encoders for machine translation. We propose a simple modification to the attention mechanism that eases the optimization of deeper models, and results in consistent gains of 0.7-1.1 BLEU on the benchmark WMT'14 English-German and WMT'15 Czech-English tasks for both architectures.

* To appear in EMNLP 2018 
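
A minimal sketch of the transparent-attention idea as described here: rather than attending only to the top encoder layer, each decoder layer attends to a learned, softmax-normalized combination of all encoder layer outputs, which shortens gradient paths in deeper encoders. Shapes and naming are assumptions.

    # Transparent-attention sketch (assumptions): per-decoder-layer blend of encoder layers.
    import torch
    import torch.nn as nn

    class TransparentCombination(nn.Module):
        def __init__(self, num_encoder_layers, num_decoder_layers):
            super().__init__()
            # s[j, i]: logit for encoder layer i (embeddings at i = 0) feeding decoder layer j.
            self.s = nn.Parameter(torch.zeros(num_decoder_layers, num_encoder_layers + 1))

        def forward(self, encoder_layer_outputs):
            """encoder_layer_outputs: list of (batch, src_len, dim) tensors, embeddings first."""
            stacked = torch.stack(encoder_layer_outputs, dim=0)   # (L+1, batch, src_len, dim)
            weights = torch.softmax(self.s, dim=-1)               # normalize per decoder layer
            # One blended encoder memory per decoder layer.
            return torch.einsum("jl,lbsd->jbsd", weights, stacked)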

Multiobjective Optimization Training of PLDA for Speaker Verification

Aug 25, 2018
Liang He, Xianhong Chen, Can Xu, Jia Liu

Most current state-of-the-art text-independent speaker verification systems take probabilistic linear discriminant analysis (PLDA) as their backend classifiers. The model parameters of PLDA are often estimated by maximizing the log-likelihood function. This training procedure focuses on increasing the log-likelihood while ignoring the distinction between speakers. In order to better distinguish speakers, we propose a multiobjective optimization training for PLDA. Experimental results show that the proposed method yields more than 10% relative improvement in both EER and MinDCF on the NIST SRE 2014 i-vector challenge dataset.
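
A heavily hedged sketch of what a multiobjective PLDA training signal could look like: the usual log-likelihood term plus a discriminative term that separates same-speaker from different-speaker trial scores. The hinge form, the weighting alpha, and the assumption of a differentiable PLDA scoring path are all illustrative choices, not the paper's formulation.

    # Multiobjective PLDA sketch (assumptions): generative + discriminative terms.
    import torch

    def multiobjective_plda_loss(plda_log_likelihood, same_scores, diff_scores,
                                 alpha=1.0, margin=1.0):
        """plda_log_likelihood: scalar tensor, log p(data | PLDA parameters).
        same_scores / diff_scores: (n,) PLDA trial scores for target / non-target pairs."""
        generative = -plda_log_likelihood                      # maximize the likelihood
        # Push target-trial scores above non-target scores by at least `margin`.
        discriminative = torch.relu(margin - (same_scores.mean() - diff_scores.mean()))
        return generative + alpha * discriminative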


Deep Learning Based Natural Language Processing for End to End Speech Translation

Aug 09, 2018
Sarvesh Patil

Deep learning methods employ multiple processing layers to learn hierarchical representations of data. They have already been deployed in a vast number of applications and have produced state-of-the-art results. Recently, as growing computational power has made high-dimensional tensor calculations practical, Natural Language Processing (NLP) applications have been given a significant boost in terms of efficiency as well as accuracy. In this paper, we examine various signal processing techniques and their application to building a speech-to-text system using deep recurrent neural networks.

* 4 pages, 6 figures 
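
A minimal sketch of the kind of pipeline outlined here: acoustic features from a signal-processing front end (e.g. log-mel filterbank frames) fed to a deep recurrent network that emits per-frame character probabilities, trainable with CTC. The layer sizes and the CTC choice are assumptions, not the paper's exact system.

    # Deep recurrent speech-to-text sketch (assumptions).
    import torch.nn as nn

    class DeepRNNSpeechToText(nn.Module):
        def __init__(self, n_features=80, n_chars=29, hidden=256, layers=3):
            super().__init__()
            self.rnn = nn.LSTM(n_features, hidden, num_layers=layers,
                               batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, n_chars + 1)   # extra unit for the CTC blank

        def forward(self, features):
            """features: (batch, time, n_features), e.g. log-mel frames."""
            h, _ = self.rnn(features)
            return self.out(h).log_softmax(dim=-1)          # per-frame character log-probs

    # Training pairs these frame-level log-probs with character transcripts via
    # nn.CTCLoss(blank=n_chars), which handles the alignment between frames and text.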
