"Text": models, code, and papers

Visual Goal-Step Inference using wikiHow

Apr 12, 2021
Yue Yang, Artemis Panagopoulou, Qing Lyu, Li Zhang, Mark Yatskar, Chris Callison-Burch

Procedural events can often be thought of as a high-level goal composed of a sequence of steps. Inferring the sub-sequence of steps of a goal can help artificial intelligence systems reason about human activities. Past work in NLP has examined the task of goal-step inference for text. We introduce the visual analogue. We propose the Visual Goal-Step Inference (VGSI) task, where a model is given a textual goal and must choose a plausible step towards that goal from among four candidate images. Our task is challenging for state-of-the-art multimodal models. We introduce a novel dataset harvested from wikiHow that consists of 772,294 images representing human actions. We show that the knowledge learned from our data can effectively transfer to other datasets like HowTo100M, increasing the multiple-choice accuracy by 15% to 20%. Our task will facilitate multimodal reasoning about procedural events.
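
A minimal sketch of the VGSI multiple-choice setup described above: score each of the four candidate step images against the textual goal with an off-the-shelf multimodal scorer and pick the best match. CLIP is used here purely as a stand-in scorer, not the paper's model, and the file names are hypothetical.

```python
# Stand-in multiple-choice scorer for VGSI: rank four candidate step images
# against a textual goal. CLIP here is illustrative, not the paper's baseline.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def choose_step(goal: str, image_paths: list) -> int:
    """Return the index of the candidate image scored most plausible for the goal."""
    images = [Image.open(p) for p in image_paths]
    inputs = processor(text=[goal], images=images, return_tensors="pt", padding=True)
    logits = model(**inputs).logits_per_text  # shape (1, num_candidates)
    return int(logits.argmax(dim=-1).item())

# Hypothetical usage with four candidate images for the goal "make a pizza":
# best = choose_step("make a pizza", ["a.jpg", "b.jpg", "c.jpg", "d.jpg"])
```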



Taxonomic survey of Hindi Language NLP systems

Jan 30, 2021
Nikita P. Desai, Prof. Vipul K. Dabhi

Natural language processing (NLP) is the task of automatically handling natural human language by machines. There is a large spectrum of possible NLP applications that help automate tasks such as translating text from one language to another, retrieving and summarizing data from very large repositories, filtering spam email, identifying fake news in digital media, mining people's sentiment and feedback, analysing political opinions and views on various government policies, and providing effective medical assistance based on a patient's history records. Hindi is the official language of India, with nearly 691 million users in India and 366 million in the rest of the world. At present, a number of government and private-sector projects and researchers in India and abroad are working towards developing NLP applications and resources for Indian languages. This survey reports the resources and applications available for Hindi language NLP.



Neural Compound-Word (Sandhi) Generation and Splitting in Sanskrit Language

Oct 24, 2020
Sushant Dave, Arun Kumar Singh, Dr. Prathosh A. P., Prof. Brejesh Lall

This paper describes neural network-based approaches to the formation and splitting of compound words, respectively known as Sandhi and Vichchhed, in the Sanskrit language. Sandhi is an important idea essential to morphological analysis of Sanskrit texts. Sandhi leads to word transformations at word boundaries. The rules of Sandhi formation are well defined but complex, sometimes optional, and in some cases require knowledge about the nature of the words being compounded. Sandhi splitting, or Vichchhed, is an even more difficult task given its non-uniqueness and context dependence. In this work, we propose formulating the problem as a sequence-to-sequence prediction task using modern deep learning techniques. Being the first fully data-driven technique, we demonstrate that our model achieves better accuracy than existing methods on multiple standard datasets, despite not using any additional lexical or morphological resources. The code is available at https://github.com/IITD-DataScience/Sandhi_Prakarana
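
The sequence-to-sequence framing can be sketched roughly as follows; this is a generic character-level encoder-decoder, not the authors' architecture, and the example word pair is illustrative only.

```python
# Minimal sketch of framing sandhi splitting as character-level seq2seq prediction:
# the input is the compounded surface form, the target is the split form with a
# separator token. This is not the authors' model, only the task framing.
import torch
import torch.nn as nn

class CharSeq2Seq(nn.Module):
    def __init__(self, vocab_size: int, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
        # src, tgt: (batch, seq_len) integer character ids
        _, state = self.encoder(self.embed(src))           # encode the compound
        dec_out, _ = self.decoder(self.embed(tgt), state)  # teacher-forced decoding
        return self.out(dec_out)                           # per-step character logits

# Illustrative example pair for vichchhed (splitting):
#   source: "rāmālayaḥ"   target: "rāma + ālayaḥ"
```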

* 6 pages, 3 figures, CODS-COMAD 2021, IIIT Bangalore, India 


CDEvalSumm: An Empirical Study of Cross-Dataset Evaluation for Neural Summarization Systems

Oct 22, 2020
Yiran Chen, Pengfei Liu, Ming Zhong, Zi-Yi Dou, Danqing Wang, Xipeng Qiu, Xuanjing Huang

Neural network-based models augmented with unsupervised pre-trained knowledge have achieved impressive performance on text summarization. However, most existing evaluation methods are limited to an in-domain setting, where summarizers are trained and evaluated on the same dataset. We argue that this approach can narrow our understanding of the generalization ability of different summarization systems. In this paper, we perform an in-depth analysis of the characteristics of different datasets and investigate the performance of different summarization models under a cross-dataset setting, in which a summarizer trained on one corpus is evaluated on a range of out-of-domain corpora. A comprehensive study of 11 representative summarization systems on 5 datasets from different domains reveals the effect of model architectures and generation paradigms (i.e., abstractive and extractive) on model generalization ability. Further, experimental results shed light on the limitations of existing summarizers. A brief introduction and supplementary code can be found at https://github.com/zide05/CDEvalSumm.
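
The cross-dataset protocol can be pictured as a train-on-one, evaluate-on-all grid; the sketch below is only a schematic of that loop with placeholder callables, not the released evaluation code.

```python
# Schematic cross-dataset evaluation: train each summarizer on one corpus and
# evaluate it on every corpus, so off-diagonal cells measure out-of-domain
# generalization. `summarizers`, `datasets`, `train`, and `evaluate` are placeholders.
def cross_dataset_eval(summarizers, datasets, train, evaluate):
    """Return a nested dict: results[model][train_set][eval_set] -> score."""
    results = {}
    for name, build_model in summarizers.items():
        results[name] = {}
        for train_name, train_data in datasets.items():
            model = train(build_model(), train_data)        # in-domain training
            results[name][train_name] = {
                eval_name: evaluate(model, eval_data)        # in- and out-of-domain scores
                for eval_name, eval_data in datasets.items()
            }
    return results
```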

* 13 pages, Findings of EMNLP2020 


Calibrated Language Model Fine-Tuning for In- and Out-of-Distribution Data

Oct 22, 2020
Lingkai Kong, Haoming Jiang, Yuchen Zhuang, Jie Lyu, Tuo Zhao, Chao Zhang

Fine-tuned pre-trained language models can suffer from severe miscalibration for both in-distribution and out-of-distribution (OOD) data due to over-parameterization. To mitigate this issue, we propose a regularized fine-tuning method. Our method introduces two types of regularization for better calibration: (1) On-manifold regularization, which generates pseudo on-manifold samples through interpolation within the data manifold. Augmented training with these pseudo samples imposes a smoothness regularization to improve in-distribution calibration. (2) Off-manifold regularization, which encourages the model to output uniform distributions for pseudo off-manifold samples to address the over-confidence issue for OOD data. Our experiments demonstrate that the proposed method outperforms existing calibration methods for text classification in terms of expected calibration error, misclassification detection, and OOD detection on six datasets. Our code can be found at https://github.com/Lingkai-Kong/Calibrated-BERT-Fine-Tuning.
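
A rough sketch of the two regularizers as read from the abstract (not the released implementation): an on-manifold term that keeps predictions smooth along interpolations between embeddings of real inputs, and an off-manifold term that pushes predictions on perturbed embeddings toward the uniform distribution.

```python
# Illustrative versions of the two calibration regularizers; the interpolation and
# noise scheme here are assumptions, not the paper's exact construction.
import torch
import torch.nn.functional as F

def on_manifold_loss(logits_a, logits_b, embed_a, embed_b, classifier_head, lam: float):
    """Smoothness term: predictions on interpolated embeddings should interpolate too."""
    mixed_embed = lam * embed_a + (1.0 - lam) * embed_b
    mixed_logits = classifier_head(mixed_embed)
    soft_target = lam * F.softmax(logits_a, dim=-1) + (1.0 - lam) * F.softmax(logits_b, dim=-1)
    return F.kl_div(F.log_softmax(mixed_logits, dim=-1), soft_target, reduction="batchmean")

def off_manifold_loss(embed, classifier_head, noise_scale: float = 1.0):
    """Over-confidence term: push outputs on off-manifold (noisy) embeddings toward uniform."""
    off_embed = embed + noise_scale * torch.randn_like(embed)
    logits = classifier_head(off_embed)
    uniform = torch.full_like(logits, 1.0 / logits.size(-1))
    return F.kl_div(F.log_softmax(logits, dim=-1), uniform, reduction="batchmean")
```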

* EMNLP2020 long paper 


IndoNLU: Benchmark and Resources for Evaluating Indonesian Natural Language Understanding

Oct 08, 2020
Bryan Wilie, Karissa Vincentio, Genta Indra Winata, Samuel Cahyawijaya, Xiaohong Li, Zhi Yuan Lim, Sidik Soleman, Rahmad Mahendra, Pascale Fung, Syafri Bahar, Ayu Purwarianti

Although Indonesian is known to be the fourth most frequently used language on the internet, research progress on this language in natural language processing (NLP) is slow-moving due to a lack of available resources. In response, we introduce the first-ever vast resource for training, evaluating, and benchmarking Indonesian natural language understanding (IndoNLU) tasks. IndoNLU includes twelve tasks, ranging from single-sentence classification to sentence-pair sequence labeling, with different levels of complexity. The datasets for the tasks lie in different domains and styles to ensure task diversity. We also provide a set of Indonesian pre-trained models (IndoBERT) trained on Indo4B, a large and clean Indonesian dataset collected from publicly available sources such as social media texts, blogs, news, and websites. We release baseline models for all twelve tasks, as well as the framework for benchmark evaluation, enabling everyone to benchmark their system performance.
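
Fine-tuning one of the released checkpoints on a single-sentence classification task could look roughly like the sketch below; the checkpoint name, label count, and example sentence are assumptions, not taken from the paper.

```python
# Minimal fine-tuning sketch for a single-sentence classification task.
# "indobenchmark/indobert-base-p1" is an assumed checkpoint identifier; substitute
# whichever checkpoint the IndoNLU release actually provides.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("indobenchmark/indobert-base-p1")
model = AutoModelForSequenceClassification.from_pretrained(
    "indobenchmark/indobert-base-p1", num_labels=3  # e.g. 3-way sentiment; task-dependent
)

batch = tokenizer(["Filmnya sangat bagus!"], return_tensors="pt", padding=True, truncation=True)
labels = torch.tensor([2])                      # hypothetical label id
loss = model(**batch, labels=labels).loss       # standard cross-entropy fine-tuning loss
loss.backward()
```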

* This paper will be presented in AACL-IJCNLP 2020 (with new results and acknowledgment) 


Crosslingual Topic Modeling with WikiPDA

Sep 23, 2020
Tiziano Piccardi, Robert West

We present Wikipedia-based Polyglot Dirichlet Allocation (WikiPDA), a crosslingual topic model that learns to represent Wikipedia articles written in any language as distributions over a common set of language-independent topics. It leverages the fact that Wikipedia articles link to each other and are mapped to concepts in the Wikidata knowledge base, such that, when represented as bags of links, articles are inherently language-independent. WikiPDA works in two steps, by first densifying bags of links using matrix completion and then training a standard monolingual topic model. A human evaluation shows that WikiPDA produces more coherent topics than monolingual text-based LDA, thus offering crosslinguality at no cost. We demonstrate WikiPDA's utility in two applications: a study of topical biases in 28 Wikipedia editions, and crosslingual supervised classification. Finally, we highlight WikiPDA's capacity for zero-shot language transfer, where a model is reused for new languages without any fine-tuning.
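
The two-step pipeline could be sketched as follows; TruncatedSVD stands in for the paper's matrix-completion step and the topic model is scikit-learn's LDA, so this is only an approximation of WikiPDA, not its released code.

```python
# Sketch of the WikiPDA two-step pipeline: (1) densify the article-by-link count
# matrix (low-rank reconstruction as a stand-in for matrix completion), then
# (2) fit a standard monolingual topic model on the densified bags of links.
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.decomposition import TruncatedSVD, LatentDirichletAllocation

def wikipda_sketch(article_link_counts: csr_matrix, n_topics: int = 50, rank: int = 100):
    # Step 1: densification via low-rank reconstruction.
    svd = TruncatedSVD(n_components=rank)
    low_rank = svd.fit_transform(article_link_counts) @ svd.components_
    densified = np.clip(low_rank, 0, None)          # LDA needs non-negative counts

    # Step 2: a standard topic model over language-independent "bags of links".
    lda = LatentDirichletAllocation(n_components=n_topics)
    topic_mix = lda.fit_transform(densified)        # article-by-topic distributions
    return topic_mix, lda
```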

* 10 pages, first version 


Attracting Sets in Perceptual Networks

Sep 17, 2020
Robert Prentner

This document gives a specification for the model used in [1]. It presents a simple way of optimizing mutual information between some input and the attractors of a (noisy) network, using a genetic algorithm. The nodes of this network are modeled as simplified versions of the structures described in the "interface theory of perception" [2]. Accordingly, the system is referred to as a "perceptual network". The present paper is an edited version of technical parts of [1] and serves as accompanying text for the Python implementation PerceptualNetworks, freely available under [3].

1. Prentner, R., and Fields, C. Using AI Methods to Evaluate a Minimal Model for Perception. Open Philosophy 2019, 2, 503-524.
2. Hoffman, D. D., Prakash, C., and Singh, M. The Interface Theory of Perception. Psychonomic Bulletin and Review 2015, 22, 1480-1506.
3. Prentner, R. PerceptualNetworks. https://github.com/RobertPrentner/PerceptualNetworks (accessed September 17, 2020).
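
A toy version of the optimization described above: a genetic algorithm searches over weight matrices of a small noisy threshold network so that the attractor reached from each input carries high mutual information about that input. All sizes, the dynamics, and the fitness estimator are illustrative and not taken from the paper or the PerceptualNetworks repository.

```python
# Toy genetic-algorithm search maximizing mutual information between inputs
# and network attractors; all parameters are illustrative.
import numpy as np

N, INPUTS, POP, GENS = 6, 4, 30, 40
rng = np.random.default_rng(0)
inputs = rng.integers(0, 2, size=(INPUTS, N)).astype(float)  # toy binary input patterns

def attractor(w, x, steps=20, noise=0.05):
    """Run noisy threshold dynamics from state x and return the final state."""
    s = x.copy()
    for _ in range(steps):
        s = ((w @ s + noise * rng.standard_normal(N)) > 0).astype(float)
    return tuple(int(v) for v in s)

def mutual_information(w, samples=30):
    """Plug-in MI estimate (bits) between input index and reached attractor."""
    joint = {}
    for i, x in enumerate(inputs):
        for _ in range(samples):
            a = attractor(w, x)
            joint[(i, a)] = joint.get((i, a), 0) + 1
    total = sum(joint.values())
    p_in, p_at = {}, {}
    for (i, a), c in joint.items():
        p_in[i] = p_in.get(i, 0) + c
        p_at[a] = p_at.get(a, 0) + c
    return sum((c / total) * np.log2(c * total / (p_in[i] * p_at[a]))
               for (i, a), c in joint.items())

# Genetic algorithm: keep the fitter half, refill by mutating the survivors.
population = [rng.standard_normal((N, N)) for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=mutual_information, reverse=True)
    survivors = population[: POP // 2]
    population = survivors + [p + 0.1 * rng.standard_normal((N, N)) for p in survivors]

print("best MI (bits):", mutual_information(population[0]))
```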


