"Text": models, code, and papers

Effective Context and Fragment Feature Usage for Named Entity Recognition

Apr 21, 2019
Nargiza Nosirova, Mingbin Xu, Hui Jiang

In this paper, we explore a new approach to named entity recognition (NER) with the goal of learning from context and fragment features more effectively, contributing to the improvement of overall recognition performance. We use the recent fixed-size ordinally forgetting encoding (FOFE) method to fully encode each sentence fragment and its left and right contexts into a fixed-size representation. Next, we organize the context and fragment features into groups, and feed each feature group to dedicated fully-connected layers. Finally, we merge each group's final dedicated layers and add a shared layer leading to a single output. Our experiments show that, given only tokenized text and trained word embeddings, our system outperforms our baseline models and is competitive with the state of the art on various well-known NER tasks.

* 7 pages, 1 figure, 7 tables (Rejected by EMNLP 2018 with score 3-4-4). arXiv admin note: text overlap with arXiv:1904.03300 
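
For readers unfamiliar with FOFE, the encoding itself is a one-line recurrence: z_t = alpha * z_{t-1} + e_t, where e_t is the one-hot vector of the t-th token and alpha in (0, 1) is the forgetting factor, so z_T summarizes the whole fragment at a fixed size. A minimal sketch of that recurrence (the vocabulary size and alpha below are illustrative, not the paper's settings):

```python
import numpy as np

def fofe_encode(token_ids, vocab_size, alpha=0.7):
    """Fixed-size ordinally forgetting encoding (FOFE):
    z_t = alpha * z_{t-1} + e_t, with e_t the one-hot vector
    of the t-th token; z_T encodes the whole sequence."""
    z = np.zeros(vocab_size)
    for t in token_ids:
        z = alpha * z        # older tokens decay exponentially
        z[t] += 1.0          # add the current token
    return z

# Toy usage: a 5-word vocabulary, fragment [2, 0, 3].
print(fofe_encode([2, 0, 3], vocab_size=5, alpha=0.7))
```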

Unsupervised Discovery of Multimodal Links in Multi-Image, Multi-Sentence Documents

Apr 16, 2019
Jack Hessel, Lillian Lee, David Mimno

Images and text co-occur everywhere on the web, but explicit links between images and sentences (or other intra-document textual units) are often not annotated by users. We present algorithms that successfully discover image-sentence relationships without relying on any explicit multimodal annotation. We explore several variants of our approach on seven datasets of varying difficulty, ranging from images that were captioned post hoc by crowd-workers to naturally-occurring user-generated multimodal documents, wherein correspondences between illustrations and individual textual units may not be one-to-one. We find that a structured training objective based on identifying whether sets of images and sentences co-occur in documents can be sufficient to predict links between specific sentences and specific images within the same document at test time.

* Working paper; comments welcome. Code and data available at www.cs.cornell.edu/~jhessel 
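
A generic instance of such a structured objective is easy to sketch: score every image-sentence pair, pool the pairwise similarities into a document-level co-occurrence score, and train contrastively against sentences drawn from other documents. The max-then-mean pooling and margin below are plausible choices for illustration, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def doc_score(img_emb, sent_emb):
    """Document-level co-occurrence score: each image votes for its
    best-matching sentence, and the votes are averaged.
    img_emb: (n_img, d), sent_emb: (n_sent, d)."""
    sim = F.normalize(img_emb, dim=-1) @ F.normalize(sent_emb, dim=-1).T
    return sim.max(dim=1).values.mean()

def contrastive_loss(img_emb, pos_sents, neg_sents, margin=0.2):
    """Hinge loss: the document's own sentences should outscore
    sentences taken from a different document."""
    return F.relu(margin + doc_score(img_emb, neg_sents)
                         - doc_score(img_emb, pos_sents))

# Toy usage: 3 images, 5 true sentences, 5 sentences from elsewhere.
imgs = torch.randn(3, 64)
print(contrastive_loss(imgs, torch.randn(5, 64), torch.randn(5, 64)))

# At test time, links can be read off the pairwise matrix:
# sim.argmax(dim=1) pairs each image with its most similar sentence.
```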

Unsupervised Singing Voice Conversion

Apr 13, 2019
Eliya Nachmani, Lior Wolf

We present a deep learning method for singing voice conversion. The proposed network is not conditioned on the text or on the notes, and it directly converts the audio of one singer to the voice of another. Training is performed without any form of supervision: no lyrics or any kind of phonetic features, no notes, and no matching samples between singers. The proposed network employs a single CNN encoder for all singers, a single WaveNet decoder, and a classifier that enforces the latent representation to be singer-agnostic. Each singer is represented by one embedding vector, which the decoder is conditioned on. In order to deal with relatively small datasets, we propose a new data augmentation scheme, as well as new training losses and protocols that are based on backtranslation. Our evaluation presents evidence that the conversion produces natural singing voices that are highly recognizable as the target singer.

* Submitted to Interspeech 2019 
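
The conditioning scheme can be made concrete with a toy sketch: a shared encoder, a decoder that receives the target singer's embedding, and a classifier on the latent code whose job is to make the latent singer-agnostic. Plain 1-D convolutions stand in for the paper's CNN encoder and WaveNet decoder, and all sizes are illustrative:

```python
import torch
import torch.nn as nn

class VoiceConverter(nn.Module):
    """Toy stand-in for the encoder/decoder/classifier wiring."""
    def __init__(self, n_singers, latent=64, emb=16):
        super().__init__()
        self.encoder = nn.Conv1d(1, latent, kernel_size=9, padding=4)
        self.singer_emb = nn.Embedding(n_singers, emb)
        self.decoder = nn.Conv1d(latent + emb, 1, kernel_size=9, padding=4)
        self.classifier = nn.Linear(latent, n_singers)

    def forward(self, audio, singer_id):
        z = self.encoder(audio)                       # (B, latent, T)
        # A full implementation would train this head through a
        # gradient-reversal layer so z carries no singer identity;
        # the sketch just exposes the logits.
        singer_logits = self.classifier(z.mean(dim=2))
        e = self.singer_emb(singer_id)                # (B, emb)
        e = e.unsqueeze(2).expand(-1, -1, z.size(2))  # broadcast over time
        out = self.decoder(torch.cat([z, e], dim=1))  # condition on singer
        return out, singer_logits

# Toy usage: batch of 2 one-second 16 kHz clips, 10 singers.
model = VoiceConverter(n_singers=10)
audio, target = torch.randn(2, 1, 16000), torch.tensor([3, 7])
converted, logits = model(audio, target)
print(converted.shape, logits.shape)  # (2, 1, 16000), (2, 10)
```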

Neural Sequential Phrase Grounding (SeqGROUND)

Mar 18, 2019
Pelin Dogan, Leonid Sigal, Markus Gross

We propose an end-to-end approach for phrase grounding in images. Unlike prior methods that typically attempt to ground each phrase independently by building an image-text embedding, our architecture formulates grounding of multiple phrases as a sequential and contextual process. Specifically, we encode region proposals and all phrases into two stacks of LSTM cells, along with so-far grounded phrase-region pairs. These LSTM stacks collectively capture context for grounding of the next phrase. The resulting architecture, which we call SeqGROUND, supports many-to-many matching by allowing an image region to be matched to multiple phrases and vice versa. We show competitive performance on the Flickr30K benchmark dataset and, through ablation studies, validate the efficacy of sequential grounding as well as individual design choices in our model architecture.

* Accepted at CVPR 2019 
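
The sequential mechanism can be sketched as a greedy loop: a history LSTM summarizes the (phrase, region) pairs grounded so far, and each new phrase is scored against every region in that context. This is a deliberate simplification; the actual model keeps full LSTM stacks over phrases and regions and supports many-to-many matching, but the sketch shows how earlier grounding decisions condition later ones:

```python
import torch
import torch.nn as nn

class SequentialGrounder(nn.Module):
    """Greedy sketch of sequential, context-conditioned grounding."""
    def __init__(self, d=128):
        super().__init__()
        self.history = nn.LSTM(2 * d, d, batch_first=True)
        self.score = nn.Bilinear(2 * d, d, 1)

    def forward(self, phrases, regions):
        # phrases: (n_phrase, d), regions: (n_region, d)
        h, state = torch.zeros(1, 1, phrases.size(1)), None
        links = []
        for p in phrases:
            ctx = h.squeeze(0).squeeze(0)                  # history summary
            q = torch.cat([p, ctx]).expand(regions.size(0), -1)
            s = self.score(q, regions).squeeze(1)          # one score per region
            r = int(s.argmax())                            # greedy choice
            links.append(r)
            pair = torch.cat([p, regions[r]]).view(1, 1, -1)
            h, state = self.history(pair, state)           # update history
        return links

# Toy usage: 3 phrases against 4 region proposals, 128-d features.
g = SequentialGrounder()
print(g(torch.randn(3, 128), torch.randn(4, 128)))
```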

Relation Extraction using Explicit Context Conditioning

Feb 25, 2019
Gaurav Singh, Parminder Bhatia

Relation Extraction (RE) aims to label relations between groups of marked entities in raw text. Most current RE models learn context-aware representations of the target entities, which are then used to establish the relation between them. This works well for intra-sentence RE, and we call these first-order relations. However, this methodology can sometimes fail to capture complex and long-range dependencies. To address this, we hypothesize that at times two target entities can be explicitly connected via a context token. We refer to such indirect relations as second-order relations and describe an efficient implementation for computing them. These second-order relation scores are then combined with first-order relation scores. Our empirical results show that the proposed method leads to state-of-the-art performance on two biomedical datasets.

* Accepted for Publication at NAACL 2019 
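
The efficient implementation is worth making concrete: once first-order scores between all token pairs are available, every second-order score can be obtained in a single broadcasted pass by routing through the best intermediate context token. The sum-then-max composition below is one plausible reading of the idea, not necessarily the paper's exact scoring function:

```python
import numpy as np

def second_order_scores(S1):
    """Given first-order scores S1[i, j] between all tokens, the
    second-order score for a pair (e1, e2) routes through the best
    intermediate context token c:
        S2[e1, e2] = max_c ( S1[e1, c] + S1[c, e2] )
    computed for all pairs at once by broadcasting."""
    # (n, n, 1) + (1, n, n) -> (n, n, n); [i, c, j] = S1[i, c] + S1[c, j]
    return (S1[:, :, None] + S1[None, :, :]).max(axis=1)

# Toy usage with 4 tokens; S2 is combined with S1 downstream.
S1 = np.random.default_rng(0).standard_normal((4, 4))
print(second_order_scores(S1).shape)  # (4, 4)
```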

OpenHowNet: An Open Sememe-based Lexical Knowledge Base

Jan 28, 2019
Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Qiang Dong, Maosong Sun, Zhendong Dong

In this paper, we present OpenHowNet, an open sememe-based lexical knowledge base. Built on the well-known HowNet, OpenHowNet comprises three components: the core data, composed of more than 100 thousand senses annotated with sememes; OpenHowNet Web, which gives a brief introduction to OpenHowNet and provides an online exhibition of its information; and the OpenHowNet API, which includes several useful functions such as accessing the core data and drawing the sememe tree structures of senses. In the main text, we first give some background, including the definition of sememes and details of HowNet, and then review previous HowNet- and sememe-based research. Finally, we detail the constituents of OpenHowNet together with their basic features and functionalities, and briefly summarize and outline future work.

* 4 pages, 3 figures 
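
To make "senses annotated with sememes" concrete, the following is a hypothetical rendering (not the actual OpenHowNet API) of a sense and its sememe tree as plain Python data structures; the relation labels and the example entry are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class SememeNode:
    """One node of a sense's sememe tree: a sememe plus the relation
    linking it to its parent (relation names are made up here)."""
    sememe: str
    relation: str = "root"
    children: list = field(default_factory=list)

@dataclass
class Sense:
    word: str
    definition: SememeNode  # root of the sememe tree

def print_tree(node, depth=0):
    """Walk the tree the way a sememe-tree drawing API might."""
    print("  " * depth + f"[{node.relation}] {node.sememe}")
    for child in node.children:
        print_tree(child, depth + 1)

# Hypothetical entry for one sense of "apple".
apple = Sense("apple", SememeNode("fruit", children=[
    SememeNode("edible", relation="modifier"),
]))
print_tree(apple.definition)
```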

A Convolutional Neural Network based Live Object Recognition System as Blind Aid

Nov 26, 2018
Kedar Potdar, Chinmay D. Pai, Sukrut Akolkar

This paper introduces a live object recognition system that serves as a blind aid. Visually impaired people rely heavily on their other senses, such as touch and hearing, to understand the environment around them. Knowing what object is in front of a blind person without touching it (by hand or some other tool) is very difficult, and in some cases physical contact between the person and the object can be dangerous, even lethal. This project employs a Convolutional Neural Network, pre-trained on the ImageNet dataset, for object recognition. A camera, aligned with the system's predetermined orientation, serves as input to the computer system, which runs the object recognition network to carry out real-time object detection. Output from the network can then be parsed and presented to the visually impaired person either as audio or as Braille text.
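
A minimal sketch of such a pipeline, an ImageNet-pretrained classifier reading frames from a fixed camera, might look like the following; the choice of network is arbitrary, and the final print stands in for the paper's audio or Braille output stage:

```python
import cv2
import torch
from torchvision import models, transforms

# Any ImageNet-pretrained classifier works as a stand-in here.
model = models.mobilenet_v2(pretrained=True).eval()
prep = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

cap = cv2.VideoCapture(0)  # the system's fixed-orientation camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        logits = model(prep(rgb).unsqueeze(0))
    # The argmax indexes the 1000 ImageNet labels; the described system
    # would speak this label or send it to a Braille display.
    print(int(logits.argmax()))
```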


End-to-End Retrieval in Continuous Space

Nov 19, 2018
Daniel Gillick, Alessandro Presta, Gaurav Singh Tomar

Most text-based information retrieval (IR) systems index objects by words or phrases. These discrete systems have been augmented by models that use embeddings to measure similarity in continuous space. But continuous-space models are typically used only to re-rank the top candidates. We consider the problem of end-to-end continuous retrieval, where standard approximate nearest neighbor (ANN) search replaces the usual discrete inverted index and the system relies entirely on distances between learned embeddings. By training simple models specifically for retrieval, with an appropriate model architecture, we improve on a discrete baseline by 8% and 26% (MAP) on two similar-question retrieval tasks. We also discuss the problem of evaluation for retrieval systems, and show how to modify existing pairwise similarity datasets for this purpose.
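
The core retrieval loop is easy to sketch: embed all candidates once, then answer queries by embedding distance alone. For clarity, exact brute-force cosine search stands in for the ANN index the paper uses, and the embeddings are random placeholders:

```python
import numpy as np

def build_index(candidate_embeddings):
    """Exact nearest-neighbor 'index': the L2-normalized matrix.
    A production system would hand this matrix to an ANN library."""
    X = np.asarray(candidate_embeddings, dtype=np.float32)
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def retrieve(index, query_embedding, k=10):
    q = query_embedding / np.linalg.norm(query_embedding)
    scores = index @ q               # cosine similarity to every candidate
    top = np.argsort(-scores)[:k]    # k best candidates, best first
    return top, scores[top]

# Toy usage: 1000 candidate questions with random 128-d embeddings.
rng = np.random.default_rng(0)
index = build_index(rng.standard_normal((1000, 128)))
ids, scores = retrieve(index, rng.standard_normal(128), k=5)
print(ids, scores)
```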


Neural Multi-Task Learning for Citation Function and Provenance

Nov 18, 2018
Xuan Su, Animesh Prasad, Min-Yen Kan, Kazunari Sugiyama

Citation function and provenance are two cornerstone tasks in citation analysis. Given a citation, the former task determines its rhetorical role, while the latter locates the text in the cited paper that contains the relevant cited information. We hypothesize that these two tasks are synergistically related, and build a model that validates this claim. For both tasks, we show that a single-layer convolutional neural network (CNN) is able to surpass the performance of existing state-of-the-art baselines. More importantly, we show that the two tasks are indeed synergistic: by training both tasks in one go using multi-task learning, we demonstrate additional performance gains in both tasks. Altogether, our contributions outperform the current state of the art by ~2% and ~7% on the citation function and citation provenance prediction tasks, respectively, with statistical significance.
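
A minimal sketch of the multi-task setup, one shared single-layer CNN encoder over the citation context with a separate classification head per task, may help; the vocabulary and label-set sizes below are illustrative:

```python
import torch
import torch.nn as nn

class CitationMultiTaskCNN(nn.Module):
    """Shared single-layer CNN encoder with two task heads."""
    def __init__(self, vocab, emb=100, filters=128,
                 n_functions=6, n_provenance=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, filters, kernel_size=3, padding=1)
        self.function_head = nn.Linear(filters, n_functions)
        self.provenance_head = nn.Linear(filters, n_provenance)

    def forward(self, token_ids):
        x = self.embed(token_ids).transpose(1, 2)        # (B, emb, T)
        h = torch.relu(self.conv(x)).max(dim=2).values   # max-over-time pool
        return self.function_head(h), self.provenance_head(h)

# Multi-task training simply sums the two cross-entropy losses so
# gradients from both tasks update the shared encoder.
model = CitationMultiTaskCNN(vocab=5000)
tokens = torch.randint(0, 5000, (8, 40))                 # batch of contexts
func_logits, prov_logits = model(tokens)
print(func_logits.shape, prov_logits.shape)              # (8, 6), (8, 2)
```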

