
"Text": models, code, and papers

Comparison of scanned administrative document images

Jan 29, 2020
Elena Andreeva, Vladimir V. Arlazarov, Oleg Slavin, Aleksey Mishev

In this work, methods for comparing digitized copies of administrative documents are considered. The problem arises, for example, when two copies of a document signed by two parties must be compared to find modifications possibly introduced by one party, as in the banking sector when contracts are concluded on paper. The proposed method of document image comparison is based on combining several ways of comparing word images using descriptors of text feature points. Testing was conducted on the public (French) Payslip Dataset. The results show that differences between two images that are versions of the same document can be found with high quality and reliability.
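
The abstract above describes comparing word images via descriptors of text feature points. As a hedged illustration of the general idea, not the authors' exact method, the OpenCV sketch below matches ORB keypoint descriptors between two scans and reports the fraction of unmatched points; the function name, descriptor choice, and thresholds are ours.

```python
# Illustrative sketch only: ORB descriptors stand in for the paper's
# text-feature-point descriptors; names and thresholds are hypothetical.
import cv2

def unmatched_fraction(path_a: str, path_b: str, ratio: float = 0.75) -> float:
    """Fraction of keypoints in scan A lacking a good match in scan B."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING).knnMatch(des_a, des_b, k=2)
    good = [p for p in matches
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return 1.0 - len(good) / max(len(kp_a), 1)  # higher means more differences
```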



Oversampling Log Messages Using a Sequence Generative Adversarial Network for Anomaly Detection and Classification

Dec 09, 2019
Amir Farzad, T. Aaron Gulliver

Dealing with imbalanced data is one of the main challenges in machine/deep learning algorithms for classification. The issue is especially pronounced for log message data, which is typically imbalanced because negative (anomalous) logs are rare. In this paper, a model is proposed that generates text log messages using a SeqGAN network. Features are then extracted with an Autoencoder, and anomaly detection and classification are performed with a GRU network. The proposed model is evaluated on two imbalanced log data sets, BGL and OpenStack. Results are presented which show that oversampling and balancing the data increases the accuracy of anomaly detection and classification.

* 23 pages, 4 figures, 2 tables. arXiv admin note: text overlap with arXiv:1911.08744 
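
As a rough sketch of the downstream stage of this pipeline (with the SeqGAN oversampling and Autoencoder feature extraction assumed done), a minimal PyTorch GRU classifier over feature sequences might look like this; all layer sizes and names are illustrative, not the paper's.

```python
# Hypothetical sketch of the classification stage; dimensions are ours.
import torch
import torch.nn as nn

class LogGRUClassifier(nn.Module):
    """GRU over autoencoder-extracted feature sequences: normal vs. anomalous."""
    def __init__(self, feat_dim: int = 64, hidden: int = 128, n_classes: int = 2):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):           # x: (batch, seq_len, feat_dim)
        _, h = self.gru(x)          # h: (num_layers, batch, hidden)
        return self.head(h[-1])     # logits: (batch, n_classes)

logits = LogGRUClassifier()(torch.randn(8, 20, 64))  # 8 sequences, 20 steps each
```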


Findings of the Third Workshop on Neural Generation and Translation

Oct 30, 2019
Hiroaki Hayashi, Yusuke Oda, Alexandra Birch, Ioannis Konstas, Andrew Finch, Minh-Thang Luong, Graham Neubig, Katsuhito Sudoh

This document describes the findings of the Third Workshop on Neural Generation and Translation, held in concert with the Conference on Empirical Methods in Natural Language Processing (EMNLP 2019). First, we summarize the research trends of papers presented in the proceedings. Second, we describe the results of the two shared tasks: 1) efficient neural machine translation (NMT), where participants were tasked with creating NMT systems that are both accurate and efficient, and 2) document-level generation and translation (DGT), where participants were tasked with developing systems that generate summaries from structured data, potentially with assistance from text in another language.

* Fixed the metadata (author list) 


I Stand With You: Using Emojis to Study Solidarity in Crisis Events

Jul 19, 2019
Sashank Santhanam, Vidhushini Srinivasan, Shaina Glass, Samira Shaikh

We study how emojis are used to express solidarity on social media in the context of two major crisis events: a natural disaster, Hurricane Irma in 2017, and the terrorist attacks in Paris in November 2015. Using annotated corpora, we first train a recurrent neural network model to classify expressions of solidarity in text. Next, we use these expressions of solidarity to characterize human behavior in online social networks through the temporal and geospatial diffusion of emojis. Our analysis reveals that emojis are a powerful indicator of sociolinguistic behaviors (solidarity) exhibited on social media as crisis events unfold.
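
A small, hypothetical helper for the kind of preprocessing this study implies: pulling emojis out of posts so their temporal and geospatial diffusion can be aggregated. The codepoint ranges below cover only the common emoji blocks and are our simplification, not the authors' tooling.

```python
# Hypothetical preprocessing helper, not taken from the paper.
import re

EMOJI_RE = re.compile(
    "[\U0001F300-\U0001FAFF\U00002600-\U000027BF\U0001F1E6-\U0001F1FF]"
)

def extract_emojis(text: str) -> list:
    """Return every emoji character in a post (flag pairs are split apart)."""
    return EMOJI_RE.findall(text)

print(extract_emojis("Pray for Paris \U0001F64F"))  # -> ['\U0001F64F']
```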



The Unreasonable Effectiveness of Transformer Language Models in Grammatical Error Correction

Jun 04, 2019
Dimitrios Alikaniotis, Vipul Raheja

Recent work on Grammatical Error Correction (GEC) has highlighted the importance of language modeling, showing that good performance can be achieved by comparing the probabilities of proposed edits. At the same time, advances in language modeling have produced linguistic output that is almost indistinguishable from human-generated text. In this paper, we up the ante by exploring the potential of more sophisticated language models in GEC and offer key insights into their strengths and weaknesses. We show that, in line with recent results in other NLP tasks, Transformer architectures achieve consistently high performance and provide a competitive baseline for future machine learning models.

* 7 pages, 3 tables, accepted at the 14th Workshop on Innovative Use of NLP for Building Educational Applications 
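
The core idea, comparing the probabilities of proposed edits under a language model, can be sketched as follows. GPT-2 via Hugging Face transformers is our stand-in here, not necessarily one of the models evaluated in the paper, and the decision rule is deliberately simplified.

```python
# Sketch: keep a proposed correction only if the LM scores it as more probable.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

@torch.no_grad()
def total_nll(sentence: str) -> float:
    """Approximate total negative log-likelihood of a sentence under GPT-2."""
    ids = tok(sentence, return_tensors="pt").input_ids
    return lm(ids, labels=ids).loss.item() * ids.size(1)

def choose(original: str, edited: str) -> str:
    """Accept the edit only if the LM finds it more probable."""
    return edited if total_nll(edited) < total_nll(original) else original

print(choose("He go to school.", "He goes to school."))
```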


BERT Rediscovers the Classical NLP Pipeline

May 15, 2019
Ian Tenney, Dipanjan Das, Ellie Pavlick

Pre-trained text encoders have rapidly advanced the state of the art on many NLP tasks. We focus on one such model, BERT, and aim to quantify where linguistic information is captured within the network. We find that the model represents the steps of the traditional NLP pipeline in an interpretable and localizable way, and that the regions responsible for each step appear in the expected sequence: POS tagging, parsing, NER, semantic roles, then coreference. Qualitative analysis reveals that the model can and often does adjust this pipeline dynamically, revising lower-level decisions on the basis of disambiguating information from higher-level representations.

* Accepted to ACL 2019 
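
A minimal sketch of the first step of such a measurement setup, assuming the Hugging Face transformers API: expose every layer's hidden states so that per-layer probes (for POS, parsing, NER, and so on) can be trained on them. The probing classifiers themselves are omitted.

```python
# Sketch: extract all of BERT's layer activations for per-layer probing.
import torch
from transformers import BertModel, BertTokenizerFast

tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained(
    "bert-base-uncased", output_hidden_states=True
).eval()

with torch.no_grad():
    out = bert(**tok("The quick brown fox jumps over the lazy dog.",
                     return_tensors="pt"))

# out.hidden_states: tuple of 13 tensors (embedding layer + 12 encoder layers),
# each of shape (1, seq_len, 768); one probe would be trained per layer.
print(len(out.hidden_states), out.hidden_states[-1].shape)
```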


Scalable Cross-Lingual Transfer of Neural Sentence Embeddings

Apr 11, 2019
Hanan Aldarmaki, Mona Diab

We develop and investigate several cross-lingual alignment approaches for neural sentence embedding models, such as the supervised inference classifier, InferSent, and sequential encoder-decoder models. We evaluate three alignment frameworks applied to these models: joint modeling, representation transfer learning, and sentence mapping, using parallel text to guide the alignment. Our results support representation transfer as a scalable approach for modular cross-lingual alignment of neural sentence embeddings, where we observe better performance compared to joint models in intrinsic and extrinsic evaluations, particularly with smaller sets of parallel data.

* Accepted to *SEM 2019 
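
One of the three frameworks, sentence mapping, can be illustrated with a post-hoc orthogonal Procrustes fit on embeddings of parallel sentences. This NumPy sketch uses our own variable names and random placeholder data; the paper's actual mapping procedure may differ.

```python
# Sketch: map source-language sentence embeddings into the target space.
import numpy as np

def fit_orthogonal_map(src: np.ndarray, tgt: np.ndarray) -> np.ndarray:
    """src, tgt: (n_pairs, dim) embeddings of parallel sentences.
    Returns orthogonal W minimizing ||src @ W - tgt||_F (Procrustes)."""
    u, _, vt = np.linalg.svd(src.T @ tgt)
    return u @ vt

rng = np.random.default_rng(0)
src = rng.normal(size=(1000, 300))   # placeholder source-language embeddings
tgt = rng.normal(size=(1000, 300))   # placeholder target-language embeddings
W = fit_orthogonal_map(src, tgt)
aligned = src @ W                    # source embeddings in the target space
```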


Data Efficient Voice Cloning for Neural Singing Synthesis

Feb 19, 2019
Merlijn Blaauw, Jordi Bonada, Ryunosuke Daido

There are many use cases in singing synthesis where creating voices from small amounts of data is desirable. In text-to-speech there have been several promising results that apply voice cloning techniques to modern deep learning based models. In this work, we adapt one such technique to the case of singing synthesis. By leveraging data from many speakers to first create a multispeaker model, small amounts of target data can then efficiently adapt the model to new unseen voices. We evaluate the system using listening tests across a number of different use cases, languages and kinds of data.

* Accepted to ICASSP 2019 
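
In spirit, the adaptation step freezes the shared multispeaker network and fits only a new speaker representation on the small target set. The PyTorch fragment below is a heavily simplified, hypothetical rendering; `model` stands for any pretrained multispeaker synthesizer, and the embedding size is our guess.

```python
# Hypothetical sketch: adapt a frozen multispeaker model to a new voice
# by training only a fresh speaker embedding.
import torch
import torch.nn as nn

def start_adaptation(model: nn.Module, emb_dim: int = 256):
    """Freeze the shared synthesis network; train only a new speaker embedding."""
    for p in model.parameters():
        p.requires_grad_(False)
    new_speaker = nn.Parameter(torch.randn(emb_dim) * 0.01)
    optimizer = torch.optim.Adam([new_speaker], lr=1e-3)
    return new_speaker, optimizer  # training loop over target clips omitted
```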


Answer Interaction in Non-factoid Question Answering Systems

Jan 15, 2019
Chen Qu, Liu Yang, Bruce Croft, Falk Scholer, Yongfeng Zhang

Information retrieval systems are evolving from document retrieval to answer retrieval. Web search logs provide large amounts of data about how people interact with ranked lists of documents, but very little is known about interaction with answer texts. In this paper, we use Amazon Mechanical Turk to investigate three answer presentation and interaction approaches in a non-factoid question answering setting. We find that people perceive and react to good and bad answers very differently, and can identify good answers relatively quickly. Our results provide the basis for further investigation of effective answer interaction and feedback methods.

* Accepted to CHIIR 2019 


Generating Diverse and Meaningful Captions

Dec 19, 2018
Annika Lindh, Robert J. Ross, Abhijit Mahalunkar, Giancarlo Salton, John D. Kelleher

Image Captioning is a task that requires models to acquire a multi-modal understanding of the world and to express this understanding in natural language text. While the state-of-the-art for this task has rapidly improved in terms of n-gram metrics, these models tend to output the same generic captions for similar images. In this work, we address this limitation and train a model that generates more diverse and specific captions through an unsupervised training approach that incorporates a learning signal from an Image Retrieval model. We summarize previous results and improve the state-of-the-art on caption diversity and novelty. We make our source code publicly available online.

* Artificial Neural Networks and Machine Learning - ICANN 2018 (pp. 176-187). Springer International Publishing 
* Accepted for presentation at The 27th International Conference on Artificial Neural Networks (ICANN 2018) 
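
The retrieval-based learning signal can be sketched as an InfoNCE-style objective: each generated caption should retrieve its own image from the batch. This PyTorch fragment uses placeholder embeddings and a temperature we chose; the paper's actual formulation may differ.

```python
# Sketch: contrastive retrieval signal rewarding captions that retrieve
# their own image; encoders are assumed to exist and are omitted here.
import torch
import torch.nn.functional as F

def retrieval_loss(cap_emb: torch.Tensor, img_emb: torch.Tensor) -> torch.Tensor:
    """cap_emb, img_emb: (batch, dim); row i is a matching caption/image pair."""
    cap = F.normalize(cap_emb, dim=-1)
    img = F.normalize(img_emb, dim=-1)
    logits = cap @ img.t() / 0.07          # cosine similarity / temperature
    targets = torch.arange(cap.size(0))    # caption i should retrieve image i
    return F.cross_entropy(logits, targets)

loss = retrieval_loss(torch.randn(16, 512), torch.randn(16, 512))
```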

