"Text": models, code, and papers

Machine Assisted Analysis of Vowel Length Contrasts in Wolof

Jun 01, 2017
Elodie Gauthier, Laurent Besacier, Sylvie Voisin

Growing digital archives and improving algorithms for the automatic analysis of text and speech create new opportunities for fundamental research in phonetics. Such empirical approaches allow statistical evaluation of a much larger set of hypotheses about phonetic variation and its conditioning factors (among them geographical and dialectal variants). This paper illustrates this vision and proposes to challenge automatic methods on the analysis of a phenomenon that is not easily observable: vowel length contrast. We focus on Wolof, an under-resourced language of Sub-Saharan Africa. In particular, we propose multiple features for a fine-grained evaluation of the degree of length contrast under different factors such as read vs. semi-spontaneous speech and standard vs. dialectal Wolof. Our measurements, made fully automatically on more than 20k vowel tokens, show that the proposed features can highlight different degrees of contrast for each vowel considered. We notably show that the contrast is weaker in semi-spontaneous speech and in a non-standard semi-spontaneous dialect.
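
The contrast measures themselves are not spelled out in the abstract; as a rough illustration, one plausible duration-based measure over such vowel tokens (the token data and field layout below are hypothetical) could look like this:

    from collections import defaultdict
    from statistics import mean

    # Hypothetical vowel tokens: (vowel quality, phonological length, duration in ms).
    tokens = [
        ("a", "short", 62), ("a", "long", 118), ("a", "short", 70),
        ("i", "long", 95), ("i", "short", 58), ("a", "long", 124),
    ]

    def duration_ratio_by_vowel(tokens):
        """Mean long/short duration ratio per vowel quality: one crude
        indicator of how strongly the length contrast is realised."""
        by_vowel = defaultdict(lambda: defaultdict(list))
        for vowel, length, dur in tokens:
            by_vowel[vowel][length].append(dur)
        return {v: mean(d["long"]) / mean(d["short"])
                for v, d in by_vowel.items() if d["long"] and d["short"]}

    print(duration_ratio_by_vowel(tokens))  # {'a': 1.83..., 'i': 1.63...}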

* Accepted to Interspeech 2017 

Leveraging Large Amounts of Weakly Supervised Data for Multi-Language Sentiment Classification

Mar 07, 2017
Jan Deriu, Aurelien Lucchi, Valeria De Luca, Aliaksei Severyn, Simon Müller, Mark Cieliebak, Thomas Hofmann, Martin Jaggi

This paper presents a novel approach to multi-lingual sentiment classification in short texts. This is a challenging task, as the amount of training data in languages other than English is very limited. Previously proposed multi-lingual approaches typically require establishing a correspondence with English, for which powerful classifiers are already available. In contrast, our method does not require such supervision. We leverage large amounts of weakly supervised data in various languages to train a multi-layer convolutional network, and demonstrate the importance of pre-training such networks. We thoroughly evaluate our approach on various multi-lingual datasets, including the recent SemEval-2016 sentiment prediction benchmark (Task 4), where we achieve state-of-the-art performance. We also compare the performance of our model trained individually for each language with a variant trained for all languages at once. We show that the latter reaches slightly worse, but still acceptable, performance than the single-language models, while benefiting from better generalization across languages.
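
As a sketch of the general architecture family described, and not the paper's exact network, a multi-layer convolutional text classifier might look as follows in PyTorch (all layer sizes and hyperparameters are illustrative assumptions):

    import torch
    import torch.nn as nn

    class ConvTextClassifier(nn.Module):
        """Multi-layer 1D CNN over word embeddings for short-text sentiment."""
        def __init__(self, vocab_size, emb_dim=64, n_classes=3):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim)
            self.convs = nn.Sequential(
                nn.Conv1d(emb_dim, 200, kernel_size=4), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(200, 200, kernel_size=3), nn.ReLU(),
            )
            self.out = nn.Linear(200, n_classes)

        def forward(self, token_ids):                   # (batch, seq_len)
            x = self.emb(token_ids).transpose(1, 2)     # (batch, emb_dim, seq_len)
            x = self.convs(x).max(dim=2).values         # global max pooling
            return self.out(x)

    model = ConvTextClassifier(vocab_size=50_000)
    logits = model(torch.randint(0, 50_000, (8, 40)))   # 8 sentences of 40 tokens

Pre-training, as the abstract emphasises, would amount to first fitting this network on the large weakly supervised corpora before fine-tuning on the labelled data.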

* appearing at WWW 2017 - 26th International World Wide Web Conference 

Learning to detect and localize many objects from few examples

Nov 17, 2016
Bastien Moysset, Christopher Kermorvant, Christian Wolf

The current trend in object detection and localization is to learn predictions with high-capacity deep neural networks trained on very large amounts of annotated data, using substantial processing power. In this work, we propose a new neural model which directly predicts bounding box coordinates. The particularity of our contribution lies in computing predictions locally, with a new form of local parameter sharing that keeps the overall number of trainable parameters low. Key components of the model are spatial 2D-LSTM recurrent layers, which convey contextual information between the regions of the image. We show that this model is more powerful than the state of the art in applications where training data is not as abundant as in the classical configuration of natural images and ImageNet/Pascal VOC tasks. We particularly target the detection of text in document images, but our method is not limited to this setting. The proposed model also facilitates the detection of many objects in a single image and can handle inputs of variable size without resizing.
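
Setting the 2D-LSTM layers aside, the core idea of predicting box coordinates locally with shared parameters can be sketched as a 1x1 convolutional head over a feature map; this is a simplification for illustration, not the paper's model:

    import torch
    import torch.nn as nn

    class LocalBoxPredictor(nn.Module):
        """Each spatial position of a feature map predicts one candidate box
        (x, y, w, h) plus a confidence score, with weights shared across
        positions -- a simplified stand-in for the paper's local predictions."""
        def __init__(self, feat_channels=64):
            super().__init__()
            self.head = nn.Conv2d(feat_channels, 5, kernel_size=1)

        def forward(self, feats):                # (batch, C, H, W)
            out = self.head(feats)               # (batch, 5, H, W)
            boxes = out[:, :4].sigmoid()         # coordinates relative to the cell
            conf = out[:, 4].sigmoid()           # objectness per position
            return boxes, conf

    feats = torch.randn(2, 64, 16, 16)
    boxes, conf = LocalBoxPredictor()(feats)     # (2, 4, 16, 16) and (2, 16, 16)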


Multi-Label Classification Method Based on Extreme Learning Machines

Aug 30, 2016
Rajasekar Venkatesan, Meng Joo Er

In this paper, an Extreme Learning Machine (ELM) based technique for multi-label classification problems is proposed and discussed. In multi-label classification, each input sample belongs to one or more class labels. The traditional binary and multi-class classification problems are subsets of the multi-label problem in which the number of labels per sample is limited to one. The proposed ELM-based multi-label classification technique is evaluated on six benchmark multi-label datasets from domains such as multimedia, text, and biology. A detailed comparison is made between the proposed method and nine state-of-the-art techniques, chosen from different categories of multi-label methods, on five evaluation metrics. The comparative results show that the proposed ELM-based multi-label classification technique is a better alternative to the existing state-of-the-art methods for multi-label problems.
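
The ELM recipe itself is standard: a random, untrained hidden layer followed by a closed-form least-squares fit of the output weights, with a threshold turning real-valued scores into a label set. A minimal sketch on toy data (the 0.5 threshold is an assumption, not necessarily the paper's choice):

    import numpy as np

    rng = np.random.default_rng(0)

    def elm_fit(X, Y, n_hidden=100):
        """Random input weights, sigmoid hidden layer, pseudoinverse solve."""
        W = rng.normal(size=(X.shape[1], n_hidden))
        b = rng.normal(size=n_hidden)
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
        beta = np.linalg.pinv(H) @ Y             # closed-form output weights
        return W, b, beta

    def elm_predict(X, W, b, beta, threshold=0.5):
        H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
        return (H @ beta >= threshold).astype(int)   # one 0/1 decision per label

    X = rng.normal(size=(200, 10))
    Y = (rng.random((200, 4)) > 0.7).astype(float)   # 4 non-exclusive labels
    W, b, beta = elm_fit(X, Y)
    print(elm_predict(X[:5], W, b, beta))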

* 6 pages, 7 figures, 7 tables, ICARCV 

Harmonization of conflicting medical opinions using argumentation protocols and textual entailment - a case study on Parkinson disease

Jul 27, 2016
Adrian Groza, Madalina Mand Nagy

Parkinson's disease is the second most common neurodegenerative disease, affecting more than 1.2 million people in Europe. Medications are available for the management of its symptoms, but the exact cause of the disease is unknown and there is currently no cure on the market. To better understand the relations between new findings and current medical knowledge, we need natural-language-processing tools that can analyse published medical papers and identify how new findings relate to existing medical knowledge. Our work aims to fill this technological gap. To identify conflicting information in medical documents, we apply textual entailment technology. To encapsulate existing medical knowledge, we rely on ontologies. To connect the formal axioms in ontologies with natural text in medical articles, we exploit ontology verbalisation techniques. To assess the level of disagreement between human agents with respect to a medical issue, we rely on fuzzy aggregation. To harmonize this disagreement, we design mediation protocols within a multi-agent framework.
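
The abstract does not say which fuzzy aggregation operator is used; as one common choice for pooling graded expert opinions, an ordered weighted averaging (OWA) operator could be sketched like this (the opinion values and weights are made up):

    def owa(values, weights):
        """Ordered weighted averaging: sort the opinions in descending order,
        then take a weighted sum; the weights encode how much the pooling
        favours the strongest opinions."""
        assert len(values) == len(weights) and abs(sum(weights) - 1.0) < 1e-9
        return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

    # Degrees of agreement (in [0, 1]) of four clinicians with one claim.
    opinions = [0.9, 0.7, 0.4, 0.2]
    print(owa(opinions, [0.4, 0.3, 0.2, 0.1]))   # 0.67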

* ICCP 2016 

Joint Line Segmentation and Transcription for End-to-End Handwritten Paragraph Recognition

Apr 28, 2016
Théodore Bluche

Offline handwriting recognition systems require cropped text line images for both training and recognition. On the one hand, annotating position and transcript at line level is costly. On the other hand, automatic line segmentation algorithms are prone to errors, compromising the subsequent recognition. In this paper, we propose a modification of the popular and efficient multi-dimensional long short-term memory recurrent neural networks (MDLSTM-RNNs) to enable end-to-end processing of handwritten paragraphs. More specifically, we replace the collapse layer, which transforms the two-dimensional representation into a sequence of predictions, with a recurrent version that recognizes one line at a time. In the proposed model, a neural network performs a form of implicit line segmentation by computing attention weights on the image representation. Experiments on paragraphs from the Rimes and IAM databases yield results competitive with those of networks trained at line level, and constitute a significant step towards end-to-end transcription of full documents.
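
The attention-based collapse can be sketched as soft attention over image rows: each step attends to roughly one text line and collapses it to a feature sequence. A simplified numpy illustration (in the real model the attention is computed by a trained network; the fixed query vector here is an assumption for brevity):

    import numpy as np

    def attention_collapse(feature_map, query):
        """Collapse a 2D feature map (H, W, C) into a 1D sequence (W, C) by
        softmax attention over the rows of each column."""
        scores = np.einsum('hwc,c->hw', feature_map, query)    # (H, W)
        weights = np.exp(scores - scores.max(axis=0))
        weights /= weights.sum(axis=0)                         # softmax over rows
        return np.einsum('hw,hwc->wc', weights, feature_map)   # weighted row sum

    fmap = np.random.randn(32, 100, 16)      # 32 rows, 100 columns, 16 channels
    line_features = attention_collapse(fmap, np.random.randn(16))
    print(line_features.shape)               # (100, 16), fed to the decoder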


Neural Language Correction with Character-Based Attention

Mar 31, 2016
Ziang Xie, Anand Avati, Naveen Arivazhagan, Dan Jurafsky, Andrew Y. Ng

Natural language correction has the potential to help language learners improve their writing skills. While approaches with separate classifiers for different error types can attain high precision, they do not flexibly handle errors such as redundancy or non-idiomatic phrasing. On the other hand, word- and phrase-based machine translation methods are not designed to cope with orthographic errors, and have recently been outpaced by neural models. Motivated by these issues, we present a neural-network-based approach to language correction. The core component of our method is an encoder-decoder recurrent neural network with an attention mechanism. By operating at the character level, the network avoids the problem of out-of-vocabulary words. We illustrate the flexibility of our approach on a dataset of noisy, user-generated text collected from an English learner forum. When combined with a language model, our method achieves a state-of-the-art $F_{0.5}$-score on the CoNLL 2014 Shared Task. We further demonstrate that training the network on additional data with synthesized errors can improve performance.
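
The attention step at the heart of such an encoder-decoder can be sketched in isolation; this is a generic dot-product variant, and the paper's exact scoring function may differ:

    import numpy as np

    def attend(decoder_state, encoder_states):
        """One attention step: score every encoder position against the
        current decoder state, softmax, and return the context vector."""
        scores = encoder_states @ decoder_state       # (T,)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ encoder_states               # (hidden,)

    enc = np.random.randn(120, 64)    # 120 characters, 64-dim encoder states
    context = attend(np.random.randn(64), enc)
    print(context.shape)              # (64,), combined with the decoder state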

* 10 pages 

Keeping it Short and Simple: Summarising Complex Event Sequences with Multivariate Patterns

Feb 10, 2016
Roel Bertens, Jilles Vreeken, Arno Siebes

We study how to obtain concise descriptions of discrete multivariate sequential data; in particular, how to do so in terms of rich multivariate sequential patterns that can capture potentially highly interesting (cor)relations between sequences. To this end, we allow our pattern language to span the domains (alphabets) of all sequences, allow patterns to overlap temporally, and allow for gaps in their occurrences. We formalise our goal by the Minimum Description Length principle, by which our objective is to discover the set of patterns that provides the most succinct description of the data. To discover high-quality pattern sets directly from data, we introduce DITTO, a highly efficient algorithm that approximates the ideal result very well. Experiments show that DITTO correctly discovers the patterns planted in synthetic data. Moreover, it scales favourably with the length of the data, the number of attributes, and the alphabet sizes. On real data, ranging from sensor networks to annotated text, DITTO discovers easily interpretable summaries that provide clear insight into both the univariate and multivariate structure.
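
At toy scale, the MDL objective can be sketched as a two-part code: the score of a pattern set is the bits needed to describe the patterns plus the bits needed to describe the data in terms of them. The encoding below is a drastic simplification of DITTO's actual scheme:

    import math

    def two_part_cost(patterns, usage, alphabet_size):
        """Toy two-part MDL score: L(model) + L(data | model), in bits.
        `usage` maps each pattern in the cover to how often it is used."""
        model_bits = sum(len(p) * math.log2(alphabet_size) for p in patterns)
        total = sum(usage.values())
        data_bits = -sum(c * math.log2(c / total) for c in usage.values() if c)
        return model_bits + data_bits    # lower means a more succinct summary

    patterns = [("a", "b"), ("c",)]                   # candidate pattern set
    usage = {("a", "b"): 30, ("c",): 25, ("d",): 10}  # cover usage counts
    print(two_part_cost(patterns, usage, alphabet_size=4))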


First-Pass Large Vocabulary Continuous Speech Recognition using Bi-Directional Recurrent DNNs

Dec 08, 2014
Awni Y. Hannun, Andrew L. Maas, Daniel Jurafsky, Andrew Y. Ng

We present a method to perform first-pass large vocabulary continuous speech recognition using only a neural network and a language model. Deep neural network acoustic models are now commonplace in HMM-based speech recognition systems, but building such systems is a complex, domain-specific task. Recent work demonstrated the feasibility of discarding the HMM sequence modeling framework by directly predicting transcript text from audio. This paper extends that approach in two ways. First, we demonstrate that a straightforward recurrent neural network architecture can achieve a high level of accuracy. Second, we propose and evaluate a modified prefix-search decoding algorithm. This approach to decoding enables first-pass speech recognition with a language model, completely unaided by the cumbersome infrastructure of HMM-based systems. Experiments on the Wall Street Journal corpus demonstrate fairly competitive word error rates and highlight the importance of bi-directional network recurrence.
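
A heavily simplified sketch of first-pass decoding with a language model is shown below. The paper proposes a modified prefix-search decoder; this toy version just runs a plain beam search over per-frame character probabilities with a language-model hook, ignoring the blank-symbol and prefix-merging bookkeeping the real algorithm needs:

    import math
    import numpy as np

    def beam_search(char_probs, lm_score, alphabet, beam_width=8, alpha=0.5):
        """Expand character hypotheses step by step, ranking them by acoustic
        log-probability plus a weighted language-model score."""
        beams = {"": 0.0}                             # prefix -> acoustic logp
        for step in char_probs:                       # char_probs: (T, |alphabet|)
            candidates = {}
            for prefix, logp in beams.items():
                for i, c in enumerate(alphabet):
                    candidates[prefix + c] = logp + math.log(step[i] + 1e-12)
            ranked = sorted(candidates.items(),
                            key=lambda kv: kv[1] + alpha * lm_score(kv[0]),
                            reverse=True)
            beams = dict(ranked[:beam_width])
        return max(beams, key=lambda p: beams[p] + alpha * lm_score(p))

    alphabet = "ab "
    probs = np.random.dirichlet(np.ones(len(alphabet)), size=5)   # fake acoustics
    print(beam_search(probs, lambda s: -0.1 * s.count(" "), alphabet))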


Toward Selectivity Based Keyword Extraction for Croatian News

Jul 17, 2014
Slobodan Beliga, Ana Meštrović, Sanda Martinčić-Ipšić

This preliminary report presents an unsupervised, network-based method for keyword extraction for Croatian. We build our approach around a new network measure, node selectivity, motivated by research on graph-based centrality approaches. The selectivity of a node is defined as the average weight distributed on the links of that node. We extract nodes (keyword candidates) based on their selectivity values. Furthermore, we expand the extracted nodes to word-tuples ranked by the highest in/out selectivity values. Selectivity-based extraction requires no linguistic knowledge, as it is derived purely from the statistical and structural information encompassed in the source text, which is reflected in the structure of the network. The obtained sets are evaluated against manually annotated keywords: for the set of extracted keyword candidates, the average F1 score is 24.63% and the average F2 score is 21.19%; for the extracted word-tuple candidates, the average F1 score is 25.9% and the average F2 score is 24.47%.
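
Selectivity itself is straightforward to compute: a node's strength (the sum of its incident edge weights) divided by its degree. A minimal sketch on an undirected toy graph (the paper works with a directed co-occurrence network and separate in/out selectivity):

    import networkx as nx

    def selectivity(G):
        """Node selectivity: strength (sum of incident edge weights) divided
        by degree, i.e. the average weight on the node's links."""
        return {n: G.degree(n, weight="weight") / G.degree(n)
                for n in G if G.degree(n) > 0}

    G = nx.Graph()
    G.add_weighted_edges_from([("cat", "sat", 3), ("sat", "mat", 1), ("cat", "mat", 2)])
    print(selectivity(G))   # {'cat': 2.5, 'sat': 2.0, 'mat': 1.5}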

