"Text": models, code, and papers

Integrated Sequence Tagging for Medieval Latin Using Deep Representation Learning

Aug 03, 2017
Mike Kestemont, Jeroen De Gussem

In this paper we consider two sequence tagging tasks for medieval Latin: part-of-speech tagging and lemmatization. These are basic yet foundational preprocessing steps in applications such as text re-use detection. Nevertheless, they are generally complicated by the considerable orthographic variation that is typical of medieval Latin. In Digital Classics, these tasks are traditionally solved in a (i) cascaded and (ii) lexicon-dependent fashion: for example, a lexicon is used to generate all the potential lemma-tag pairs for a token, and a context-aware PoS tagger then selects the most appropriate pair. Apart from the problems with out-of-lexicon items, error percolation is a major downside of such approaches. In this paper we explore the possibility of solving these tasks elegantly with a single, integrated approach. For this, we make use of a layered neural network architecture from the field of deep representation learning.

* Journal of Data Mining & Digital Humanities, Special Issue on Computer-Aided Processing of Intertextuality in Ancient Languages, Towards a Digital Ecosystem: NLP. Corpus infrastructure. Methods for Retrieving Texts and Computing Text Similarities (August 6, 2017) jdmdh:3835 
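
As a rough illustration of what such an integrated tagger might look like (a hedged sketch, not the authors' exact architecture), one shared sentence encoder can feed two output heads that predict the PoS tag and the lemma class jointly, avoiding the cascade. All vocabulary sizes and layer widths below are invented placeholders.

from tensorflow.keras import layers, Model

n_tokens, n_tags, n_lemmas = 5000, 20, 3000   # hypothetical vocabulary sizes
seq_len, emb_dim = 30, 64                      # illustrative sequence length / embedding size

tokens = layers.Input(shape=(seq_len,), dtype="int32")
x = layers.Embedding(n_tokens, emb_dim, mask_zero=True)(tokens)
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)

# Two heads share the encoder, so tag and lemma are predicted jointly.
pos_head = layers.Dense(n_tags, activation="softmax", name="pos")(x)
lemma_head = layers.Dense(n_lemmas, activation="softmax", name="lemma")(x)

model = Model(tokens, [pos_head, lemma_head])
model.compile(optimizer="adam",
              loss={"pos": "sparse_categorical_crossentropy",
                    "lemma": "sparse_categorical_crossentropy"})
model.summary()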


Unified Embedding and Metric Learning for Zero-Exemplar Event Detection

May 05, 2017
Noureldien Hussein, Efstratios Gavves, Arnold W. M. Smeulders

Event detection in unconstrained videos is conceived as content-based video retrieval with two modalities: textual and visual. Given a text describing a novel event, the goal is to rank related videos accordingly. This task is zero-exemplar: no video examples of the novel event are given. Related works train a bank of concept detectors on external data sources; these detectors predict confidence scores for test videos, which are ranked and retrieved accordingly. In contrast, we learn a joint space in which the visual and textual representations are embedded. The space casts a novel event as a probability distribution over pre-defined events, and it learns to measure the distance between an event and its related videos. Our model is trained end-to-end on the publicly available EventNet dataset. When applied to the TRECVID Multimedia Event Detection dataset, it outperforms the state of the art by a considerable margin.

* IEEE CVPR 2017 
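
A minimal sketch of the retrieval side of such a joint space, assuming precomputed textual and visual features: both modalities are projected into one shared space and test videos are ranked by similarity to the embedded event description. The projection matrices here are random stand-ins for the learned embedding, not the paper's trained model.

import numpy as np

rng = np.random.default_rng(0)
d_text, d_vid, d_joint = 300, 4096, 256        # illustrative feature dimensions
W_t = rng.normal(size=(d_text, d_joint))       # stand-in for the learned text projection
W_v = rng.normal(size=(d_vid, d_joint))        # stand-in for the learned video projection

def embed(x, W):
    """Project features into the joint space and L2-normalize."""
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

query_text = rng.normal(size=(d_text,))        # features of the novel event description
video_feats = rng.normal(size=(100, d_vid))    # features of 100 test videos

q = embed(query_text, W_t)
V = embed(video_feats, W_v)
ranking = np.argsort(-V @ q)                   # videos sorted by cosine similarity to the event
print(ranking[:5])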


Sampling Variations of Lead Sheets

Mar 02, 2017
Pierre Roy, Alexandre Papadopoulos, François Pachet

Machine-learning techniques have recently been used, with spectacular results, to generate artefacts such as music or text. However, these techniques are still unable to capture and generate artefacts that are convincingly structured. In this paper we present an approach to generating structured musical sequences. We introduce a mechanism for efficiently sampling variations of musical sequences: given an input sequence and a statistical model, this mechanism samples a set of sequences whose distance to the input sequence lies approximately within specified bounds. The mechanism is implemented as an extension of belief propagation and uses local fields to bias the generation. We show experimentally that the sampled sequences correlate closely with the standard musical similarity measure defined by Mongeau and Sankoff. We then show how this mechanism can be used to implement composition strategies that enforce arbitrary structure on a musical lead sheet generation problem.

* 16 pages, 11 figures 
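
The following toy sketch conveys the flavour of distance-bounded variation sampling, but with a much simpler stand-in for the paper's belief-propagation machinery: candidates are drawn from a first-order Markov model of the input and kept only if their edit distance to the input lies within the requested bounds. The symbols here stand in for lead-sheet events.

import random

def edit_distance(a, b):
    """Levenshtein distance with a rolling one-row DP table."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def markov_sample(seq, length, rng):
    """Sample a sequence from a first-order Markov model estimated on seq."""
    trans = {}
    for a, b in zip(seq, seq[1:]):
        trans.setdefault(a, []).append(b)
    out = [rng.choice(seq)]
    for _ in range(length - 1):
        out.append(rng.choice(trans.get(out[-1], seq)))
    return out

def sample_variations(seq, lo, hi, n=5, tries=5000, seed=0):
    """Rejection sampling: keep candidates whose distance to seq is in [lo, hi]."""
    rng = random.Random(seed)
    keep = []
    for _ in range(tries):
        cand = markov_sample(seq, len(seq), rng)
        if lo <= edit_distance(seq, cand) <= hi:
            keep.append(cand)
            if len(keep) == n:
                break
    return keep

print(sample_variations(list("CDEFGABC"), lo=2, hi=4))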


Long-Term Trends in the Public Perception of Artificial Intelligence

Dec 02, 2016
Ethan Fast, Eric Horvitz

Analyses of text corpora over time can reveal trends in beliefs, interest, and sentiment about a topic. We focus on views expressed about artificial intelligence (AI) in the New York Times over a 30-year period. General interest, awareness, and discussion about AI have waxed and waned since the field was founded in 1956. We present a set of measures that capture levels of engagement, degrees of pessimism and optimism, the prevalence of specific hopes and concerns, and topics that are linked to discussions about AI over the decades. We find that discussion of AI has increased sharply since 2009, and that these discussions have been consistently more optimistic than pessimistic. However, when we examine specific concerns, we find that worries about loss of control of AI, ethical concerns about AI, and the negative impact of AI on work have grown in recent years. We also find that hopes for AI in healthcare and education have increased over time.

* In AAAI 2017 
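
A toy sketch of the kind of trend measures described above, computing per-year engagement counts and an optimism ratio from labelled paragraphs; the records are invented placeholders rather than the annotated New York Times data used in the paper.

from collections import Counter, defaultdict

# (year, label) pairs for AI-related paragraphs; labels are illustrative.
records = [
    (1986, "optimistic"), (1986, "pessimistic"), (2009, "optimistic"),
    (2015, "optimistic"), (2015, "optimistic"), (2015, "pessimistic"),
]

per_year = defaultdict(Counter)
for year, label in records:
    per_year[year][label] += 1

for year in sorted(per_year):
    counts = per_year[year]
    total = sum(counts.values())
    print(year, "engagement:", total,
          "optimism ratio:", round(counts["optimistic"] / total, 2))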


Open-Ended Visual Question-Answering

Oct 09, 2016
Issey Masuda, Santiago Pascual de la Puente, Xavier Giro-i-Nieto

This thesis report studies methods for solving Visual Question-Answering (VQA) tasks with a deep learning framework. As a preliminary step, we explore Long Short-Term Memory (LSTM) networks, as used in Natural Language Processing (NLP), to tackle text-based question answering. We then modify this model to accept an image as an input in addition to the question. For this purpose, we explore the VGG-16 and K-CNN convolutional neural networks to extract visual features from the image, which are merged with a word embedding or a sentence embedding of the question to predict the answer. This work was successfully submitted to the Visual Question Answering Challenge 2016, where it achieved 53.62% accuracy on the test set. The developed software follows best programming practices and Python code style, providing a consistent baseline in Keras for different configurations.

* Bachelor thesis report graded with A with honours at ETSETB Telecom BCN school, Universitat Politècnica de Catalunya (UPC). June 2016. Source code and models are publicly available at http://imatge-upc.github.io/vqa-2016-cvprw/
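
A minimal Keras sketch in the spirit of the model described: an LSTM encodes the question, precomputed VGG-16 image features are projected and merged with the question encoding, and a softmax over candidate answers makes the prediction. Layer sizes and vocabulary figures are illustrative assumptions, not the thesis' exact configuration.

from tensorflow.keras import layers, Model

vocab, max_q_len, n_answers, img_dim = 10000, 25, 1000, 4096   # illustrative sizes

question = layers.Input(shape=(max_q_len,), dtype="int32")
image = layers.Input(shape=(img_dim,))            # precomputed VGG-16 features

q = layers.Embedding(vocab, 300, mask_zero=True)(question)
q = layers.LSTM(256)(q)                           # question encoding
v = layers.Dense(256, activation="relu")(image)   # projected visual features

merged = layers.Concatenate()([q, v])
answer = layers.Dense(n_answers, activation="softmax")(merged)

model = Model([question, image], answer)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()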


Zipf's law emerges asymptotically during phase transitions in communicative systems

Apr 01, 2016
Bohdan B. Khomtchouk, Claes Wahlestedt

Zipf's law predicts a power-law relationship between word rank and frequency in language communication systems; it is widely observed in texts, yet its origins remain enigmatic. Computer simulations have shown that language communication systems emerge at an abrupt phase transition in the fidelity of mappings between symbols and objects. Since the phase transition approximates the Heaviside or step function, we show, using the Laplace transform, that Zipfian scaling emerges asymptotically at high rank. We thereby demonstrate that Zipf's law gradually emerges from the moment of phase transition in communicative systems, and that this power-law scaling behavior explains the emergence of natural languages at phase transitions. The emergence of Zipf's law during language communication suggests that the use of rare words in a lexicon is critical for the construction of an effective communicative system at the phase transition.

* 6 pages, 3 figures 
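
The key mathematical step the abstract appeals to, namely that the Laplace transform of a Heaviside step is 1/s, i.e. a power law with exponent -1 (the Zipfian form), can be checked numerically. The small script below is only an illustration of that identity, not code from the paper; the truncation point and grid size are arbitrary.

import numpy as np

def laplace_of_step(s, t_max=200.0, n=200_000):
    """Trapezoid-rule estimate of integral_0^inf H(t) e^{-st} dt (H(t) = 1 for t >= 0)."""
    t = np.linspace(0.0, t_max, n)
    vals = np.exp(-s * t)
    dt = t[1] - t[0]
    return dt * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

for s in (0.5, 1.0, 2.0, 4.0):
    print(f"s={s}: numeric={laplace_of_step(s):.4f}  exact 1/s={1/s:.4f}")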


Combining Neural Networks and Log-linear Models to Improve Relation Extraction

Nov 18, 2015
Thien Huu Nguyen, Ralph Grishman

The last decade has witnessed the success of traditional feature-based methods that exploit discrete structures such as words or lexical patterns to extract relations from text. Recently, convolutional and recurrent neural networks have provided very effective mechanisms to capture the hidden structures within sentences via continuous representations, thereby significantly advancing the performance of relation extraction. The advantage of convolutional neural networks is their capacity to generalize over consecutive k-grams in a sentence, while recurrent neural networks are effective at encoding long-range sentence context. This paper proposes to combine the traditional feature-based method with convolutional and recurrent neural networks to benefit from their advantages simultaneously. Our systematic evaluation of different network architectures and combination methods demonstrates the effectiveness of this approach and yields state-of-the-art performance on the ACE 2005 and SemEval datasets.
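
One simple way to combine the three systems mentioned above is to interpolate their class scores; the sketch below shows such a log-linear-style interpolation with placeholder scores and hand-set weights. The paper itself evaluates several richer combination methods, so this is only an illustrative baseline.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

n_relations = 5                                     # illustrative label set size
scores_loglinear = np.random.randn(n_relations)     # from sparse lexical features
scores_cnn = np.random.randn(n_relations)           # from consecutive k-gram filters
scores_rnn = np.random.randn(n_relations)           # from long-range sentence context

weights = np.array([0.3, 0.4, 0.3])                 # tunable interpolation weights
combined = softmax(weights[0] * scores_loglinear
                   + weights[1] * scores_cnn
                   + weights[2] * scores_rnn)
print("predicted relation:", combined.argmax())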



A Mood-based Genre Classification of Television Content

Aug 06, 2015
Humberto Corona, Michael P. O'Mahony

The classification of television content helps users organise and navigate through the large list of channels and programs now available. In this paper, we address the problem of television content classification by exploiting text information extracted from program transcriptions. We present an analysis which adapts a model for sentiment that has been widely and successfully applied in other fields such as music or blog posts. We use a real-world dataset obtained from the Boxfish API to compare the performance of classifiers trained on a number of different feature sets. Our experiments show that, over a large collection of television content, program genres can be represented in a three-dimensional space of valence, arousal and dominance, and that promising classification results can be achieved using features based on this representation. This finding supports the use of the proposed representation of television content as a feature space for similarity computation and recommendation generation.

* In ACM Workshop on Recommendation Systems for Television and Online Video 2014, Foster City, California, USA
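
A toy sketch of the representation described above: per-word valence/arousal/dominance scores are averaged over a program transcript to give a three-dimensional feature vector, on which a standard classifier is trained. The tiny lexicon, transcripts, and genre labels are invented placeholders, not the Boxfish data.

import numpy as np
from sklearn.linear_model import LogisticRegression

vad_lexicon = {            # word -> (valence, arousal, dominance); values are illustrative
    "happy": (0.9, 0.6, 0.7), "fear": (0.1, 0.8, 0.3),
    "calm": (0.7, 0.1, 0.6), "goal": (0.8, 0.9, 0.8),
}

def vad_features(transcript):
    """Average VAD scores of known words; fall back to a neutral vector."""
    vecs = [vad_lexicon[w] for w in transcript.lower().split() if w in vad_lexicon]
    return np.mean(vecs, axis=0) if vecs else np.full(3, 0.5)

transcripts = ["happy calm happy", "fear fear calm", "goal goal happy"]
genres = ["comedy", "drama", "sports"]

X = np.array([vad_features(t) for t in transcripts])
clf = LogisticRegression(max_iter=1000).fit(X, genres)
print(clf.predict([vad_features("calm happy goal")]))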


Long Short-Term Memory Over Tree Structures

Mar 16, 2015
Xiaodan Zhu, Parinaz Sobhani, Hongyu Guo

The chain-structured long short-term memory (LSTM) has been shown to be effective in a wide range of problems such as speech recognition and machine translation. In this paper, we propose to extend it to tree structures, in which a memory cell can reflect the history memories of multiple child cells or multiple descendant cells in a recursive process. We call the model S-LSTM, which provides a principled way of considering long-distance interaction over hierarchies, e.g., language or image parse structures. We leverage the model for semantic composition to understand the meaning of text, a fundamental problem in natural language understanding, and show that it outperforms a state-of-the-art recursive model when that model's composition layers are replaced with S-LSTM memory blocks. We also show that utilizing the given structures achieves better performance than ignoring them.

* On February 6th, 2015, this work was submitted to the International Conference on Machine Learning (ICML) 
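
A minimal numpy sketch of the core idea: an LSTM-style memory cell whose state is composed from the states of its children, applied recursively over a binary tree. The gate parametrisation below is a simplified illustration and does not reproduce the exact S-LSTM equations from the paper.

import numpy as np

rng = np.random.default_rng(0)
d = 8                                             # illustrative hidden size
def W(): return rng.normal(scale=0.1, size=(d, 2 * d))

Wi, Wfl, Wfr, Wo, Wu = W(), W(), W(), W(), W()    # untrained parameters for illustration
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def compose(left, right):
    """Combine the (h, c) states of two children into the parent's (h, c)."""
    (hl, cl), (hr, cr) = left, right
    x = np.concatenate([hl, hr])
    i = sigmoid(Wi @ x)       # input gate
    fl = sigmoid(Wfl @ x)     # forget gate for the left child
    fr = sigmoid(Wfr @ x)     # forget gate for the right child
    o = sigmoid(Wo @ x)       # output gate
    u = np.tanh(Wu @ x)       # candidate cell state
    c = fl * cl + fr * cr + i * u
    return np.tanh(c) * o, c

def leaf(vec):
    return np.tanh(vec), np.zeros(d)

# Compose the tree ((w1 w2) w3) bottom-up.
w1, w2, w3 = (rng.normal(size=d) for _ in range(3))
root_h, root_c = compose(compose(leaf(w1), leaf(w2)), leaf(w3))
print(root_h.shape)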


How Many Topics? Stability Analysis for Topic Models

Jun 19, 2014
Derek Greene, Derek O'Callaghan, Pádraig Cunningham

Topic modeling refers to the task of discovering the underlying thematic structure in a text corpus, where the output is commonly presented as a report of the top terms appearing in each topic. Despite the diversity of topic modeling algorithms that have been proposed, a common challenge in successfully applying these techniques is the selection of an appropriate number of topics for a given corpus. Choosing too few topics will produce results that are overly broad, while choosing too many will result in the "over-clustering" of a corpus into many small, highly-similar topics. In this paper, we propose a term-centric stability analysis strategy to address this issue, the idea being that a model with an appropriate number of topics will be more robust to perturbations in the data. Using a topic modeling approach based on matrix factorization, evaluations performed on a range of corpora show that this strategy can successfully guide the model selection process.

* Improve readability of plots. Add minor clarifications 
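
A compact sketch loosely following the term-centric stability strategy described above: for each candidate number of topics k, fit NMF on the full corpus and on random subsamples, match topics between runs, and score the agreement of their top-term sets. The toy corpus is a placeholder and the parameter choices are illustrative.

import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

# Placeholder corpus; in practice this would be the document collection under study.
docs = ["sports match team win", "election vote party candidate", "team game score win",
        "party vote campaign candidate", "music band concert tour", "band concert tour stage"] * 10

vec = TfidfVectorizer()
X = vec.fit_transform(docs)
terms = np.array(vec.get_feature_names_out())

def top_terms(H, t=4):
    """Top-t term sets for each topic in an NMF components matrix."""
    return [set(terms[np.argsort(row)[::-1][:t]]) for row in H]

def jaccard(a, b):
    return len(a & b) / len(a | b)

def stability(k, n_runs=5, seed=0):
    """Mean agreement between the full-corpus model and models fit on subsamples."""
    rng = np.random.default_rng(seed)
    ref = top_terms(NMF(n_components=k, init="nndsvd", max_iter=500).fit(X).components_)
    scores = []
    for _ in range(n_runs):
        idx = rng.choice(X.shape[0], size=int(0.8 * X.shape[0]), replace=False)
        run = top_terms(NMF(n_components=k, init="nndsvd", max_iter=500).fit(X[idx]).components_)
        sim = np.array([[jaccard(r, s) for s in run] for r in ref])
        rows, cols = linear_sum_assignment(-sim)          # match topics across runs
        scores.append(sim[rows, cols].mean())
    return float(np.mean(scores))

for k in (2, 3, 4, 5):
    print(k, round(stability(k), 3))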

