"Topic": models, code, and papers

Prefix-Tuning: Optimizing Continuous Prompts for Generation

Jan 01, 2021
Xiang Lisa Li, Percy Liang

Fine-tuning is the de facto way to leverage large pretrained language models to perform downstream tasks. However, it modifies all the language model parameters and therefore necessitates storing a full copy for each task. In this paper, we propose prefix-tuning, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen, but optimizes a small continuous task-specific vector (called the prefix). Prefix-tuning draws inspiration from prompting, allowing subsequent tokens to attend to this prefix as if it were "virtual tokens". We apply prefix-tuning to GPT-2 for table-to-text generation and to BART for summarization. We find that by learning only 0.1% of the parameters, prefix-tuning obtains comparable performance in the full data setting, outperforms fine-tuning in low-data settings, and extrapolates better to examples with topics unseen during training.
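
The mechanism is simple enough to sketch. Below is a minimal, illustrative PyTorch version that trains embedding-level "virtual tokens" prepended to a frozen GPT-2; note that the paper proper optimizes prefix activations at every layer (via past key-value states and a reparameterization), so this is a simplified variant of the idea, not the authors' implementation.

    import torch
    import torch.nn as nn
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    model = GPT2LMHeadModel.from_pretrained("gpt2")
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    for p in model.parameters():
        p.requires_grad = False  # all LM parameters stay frozen

    # the only trainable parameters: one embedding per "virtual token"
    prefix_len = 10
    prefix = nn.Parameter(torch.randn(prefix_len, model.config.n_embd) * 0.02)
    optimizer = torch.optim.Adam([prefix], lr=5e-4)

    def train_step(text):
        ids = tokenizer(text, return_tensors="pt").input_ids   # (1, T)
        tok_embeds = model.get_input_embeddings()(ids)         # (1, T, d)
        # prepend the prefix so subsequent tokens attend to it
        inputs = torch.cat([prefix.unsqueeze(0), tok_embeds], dim=1)
        # no language-modeling loss on prefix positions: label them -100
        labels = torch.cat(
            [torch.full((1, prefix_len), -100, dtype=torch.long), ids], dim=1)
        loss = model(inputs_embeds=inputs, labels=labels).loss
        loss.backward()  # gradients reach only the prefix
        optimizer.step()
        optimizer.zero_grad()
        return loss.item()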


On Calibration of Scene-Text Recognition Models

Dec 23, 2020
Ron Slossberg, Oron Anschel, Amir Markovitz, Ron Litman, Aviad Aberdam, Shahar Tsiper, Shai Mazor, Jon Wu, R. Manmatha

In this work, we study the problem of word-level confidence calibration for scene-text recognition (STR). Although confidence calibration has been an active research area for the last several decades, calibration for structured and sequence prediction has been scarcely explored. We analyze several recent STR methods and show that they are consistently overconfident. We then focus on calibrating STR models at the word rather than the character level. In particular, we demonstrate that for attention-based decoders, calibrating individual character predictions increases word-level calibration error compared to an uncalibrated model. In addition, we apply existing calibration methodologies as well as new sequence-based extensions to numerous STR models, reducing calibration error by up to a factor of nearly 7. Finally, we show consistently improved accuracy by applying our proposed sequence calibration method as a pre-processing step for beam search.
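
As a rough illustration of calibrating at the word rather than the character level, here is a hedged NumPy sketch of temperature scaling fitted against a word-level likelihood, where a word's probability is the product of its character probabilities. The array shapes and the grid search are assumptions for demonstration, not the paper's exact procedure.

    import numpy as np

    def softmax(z, T):
        z = z / T - (z / T).max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def word_nll(logits, labels, T):
        """logits: (N, L, V) decoder scores; labels: (N, L) character ids."""
        p = softmax(logits, T)
        char_p = np.take_along_axis(p, labels[..., None], axis=-1).squeeze(-1)
        # word probability = product of its character probabilities
        return -np.log(char_p.prod(axis=-1) + 1e-12).mean()

    def fit_temperature(val_logits, val_labels):
        # pick the temperature minimizing word-level NLL on validation data
        grid = np.linspace(0.5, 5.0, 46)
        return min(grid, key=lambda T: word_nll(val_logits, val_labels, T))

    def word_confidence(logits, preds, T):
        p = softmax(logits, T)
        char_p = np.take_along_axis(p, preds[..., None], axis=-1).squeeze(-1)
        return char_p.prod(axis=-1)  # calibrated word-level confidence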


Ontology-based and User-focused Automatic Text Summarization (OATS): Using COVID-19 Risk Factors as an Example

Nov 18, 2020
Po-Hsu Allen Chen, Amy Leibrand, Jordan Vasko, Mitch Gauthier

This paper proposes a novel Ontology-based and user-focused Automatic Text Summarization (OATS) system, whose goal is to automatically generate a summary from unstructured text by extracting the sentences that contain information aligned with the user's focus. OATS consists of two modules: ontology-based topic identification and user-focused text summarization. It first uses an ontology-based approach to identify documents relevant to the user's interest, and then builds the summary from answers extracted by a question answering model, using questions specified by the user. To support the fight against the COVID-19 pandemic, we use COVID-19 risk factors as an example to demonstrate the proposed OATS system, with the aim of helping the medical community accurately identify relevant scientific literature and efficiently review the information that addresses COVID-19 risk factors.
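
The two-stage pipeline the abstract describes can be sketched roughly as follows. The ontology dictionary, the confidence threshold, and the chosen QA model are placeholders for illustration, not the paper's actual resources or settings.

    from transformers import pipeline

    # placeholder ontology and QA model, not the paper's actual resources
    ontology = {"risk_factors": ["diabetes", "hypertension", "obesity", "smoking"]}
    qa = pipeline("question-answering",
                  model="distilbert-base-cased-distilled-squad")

    def relevant(doc, topic):
        # stage 1: ontology-based topic identification (here, term matching)
        return any(term in doc.lower() for term in ontology[topic])

    def summarize(docs, questions, topic, threshold=0.3):
        # stage 2: keep the sentences containing extracted answers
        summary = []
        for doc in (d for d in docs if relevant(d, topic)):
            for q in questions:
                ans = qa(question=q, context=doc)
                if ans["score"] < threshold:
                    continue
                for sent in doc.split(". "):  # naive sentence split
                    if ans["answer"] in sent:
                        summary.append(sent.strip())
                        break
        return summary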


Neural Generation Meets Real People: Towards Emotionally Engaging Mixed-Initiative Conversations

Sep 05, 2020
Ashwin Paranjape, Abigail See, Kathleen Kenealy, Haojun Li, Amelia Hardy, Peng Qi, Kaushik Ram Sadagopan, Nguyet Minh Phu, Dilara Soylu, Christopher D. Manning

We present Chirpy Cardinal, an open-domain dialogue agent, as a research platform for the 2019 Alexa Prize competition. Building an open-domain socialbot that talks to real people is challenging: such a system must meet multiple user expectations, including broad world knowledge, conversational style, and emotional connection. Our socialbot engages users on their terms, prioritizing their interests, feelings, and autonomy. As a result, it provides a responsive, personalized user experience, capable of talking knowledgeably about a wide variety of topics as well as chatting empathetically about ordinary life. Neural generation plays a key role in achieving these goals, providing the backbone for our conversational and emotional tone. At the end of the competition, Chirpy Cardinal progressed to the finals with an average rating of 3.6/5.0, a median conversation duration of 2 minutes 16 seconds, and a 90th-percentile duration of over 12 minutes.

* Published in 3rd Proceedings of Alexa Prize (Alexa Prize 2019) 

A Survey of Behavior Trees in Robotics and AI

May 13, 2020
Matteo Iovino, Edvards Scukins, Jonathan Styrud, Petter Ögren, Christian Smith

Behavior Trees (BTs) were invented as a tool to enable modular AI in computer games, but have received increasing attention in the robotics community over the last decade. With rising demands on agent AI complexity, game programmers found that the Finite State Machines (FSMs) they used scaled poorly and were difficult to extend, adapt, and reuse. In BTs, the state transition logic is not dispersed across the individual states, but organized in a hierarchical tree structure with the states as leaves. This has a significant effect on modularity, which in turn simplifies both synthesis and analysis by humans and algorithms alike. These advantages are needed not only in game AI design but also in robotics, as is evident from the research being done. In this paper, we present a comprehensive survey of BTs in Artificial Intelligence and Robotics applications. The existing literature is described and categorized based on methods, application areas, and contributions, and the paper concludes with a list of open research challenges.
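
The structural point, that transition logic lives in the inner nodes while the states are leaves, is easy to see in code. Here is a minimal sketch of the two standard composite node types; the toy robot example at the end is ours, not from the survey.

    from enum import Enum

    class Status(Enum):
        SUCCESS = 1
        FAILURE = 2
        RUNNING = 3

    class Leaf:
        """A state (action or condition) sits at a leaf of the tree."""
        def __init__(self, fn):
            self.fn = fn
        def tick(self):
            return self.fn()

    class Sequence:
        """Succeeds only if every child succeeds, ticked left to right."""
        def __init__(self, *children):
            self.children = children
        def tick(self):
            for child in self.children:
                status = child.tick()
                if status != Status.SUCCESS:
                    return status
            return Status.SUCCESS

    class Fallback:
        """Ticks children in order until one does not fail (a.k.a. Selector)."""
        def __init__(self, *children):
            self.children = children
        def tick(self):
            for child in self.children:
                status = child.tick()
                if status != Status.FAILURE:
                    return status
            return Status.FAILURE

    # a robot that recharges when its battery is low and patrols otherwise
    tree = Fallback(
        Sequence(Leaf(lambda: Status.SUCCESS),   # condition: battery low?
                 Leaf(lambda: Status.RUNNING)),  # action: go to charger
        Leaf(lambda: Status.SUCCESS),            # action: patrol
    )
    print(tree.tick())  # Status.RUNNING

Because each subtree exposes only the tick interface, whole behaviors can be swapped or reused without touching the transition logic elsewhere, which is the modularity advantage the survey highlights.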


Language Models as an Alternative Evaluator of Word Order Hypotheses: A Case Study in Japanese

May 02, 2020
Tatsuki Kuribayashi, Takumi Ito, Jun Suzuki, Kentaro Inui

We examine a methodology that uses neural language models (LMs) to analyze word order. This LM-based method has the potential to overcome difficulties that existing methods face, such as the propagation of preprocessor errors in count-based methods. In this study, we explore whether the LM-based method is valid for analyzing word order. As a case study, we focus on Japanese due to its complex and flexible word order. To validate the LM-based method, we test (i) parallels between LMs and human word order preferences, and (ii) consistency of the results obtained with the LM-based method against previous linguistic studies. Through our experiments, we tentatively conclude that LMs display sufficient word order knowledge to be used as an analysis tool. Finally, using the LM-based method, we demonstrate the relationship between canonical word order and topicalization, which had yet to be analyzed in large-scale experiments.
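
The core of such an LM-based method is scoring alternative orderings of the same words by their log-likelihood under the model. A hedged sketch follows; an English GPT-2 and example stand in here purely for illustration, whereas the paper works with Japanese LMs and controlled stimuli.

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

    @torch.no_grad()
    def sentence_logprob(text):
        ids = tokenizer(text, return_tensors="pt").input_ids
        # .loss is the mean per-token NLL; rescale to a total log-probability
        loss = model(ids, labels=ids).loss
        return -loss.item() * (ids.shape[1] - 1)

    orders = ["She gave the book to him.", "She gave to him the book."]
    print(max(orders, key=sentence_logprob))  # the canonical order should win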

* Accepted at ACL 2020 

Unsupervised Keyphrase Rubric Relationship Classification in Complex Assignments

Apr 06, 2020
Manikandan Ravikiran

Complex assignments are open-ended questions with varying content, irrespective of course diversity and mode of communication. At this scale, reviews are often incomplete and lack detail, leading to a high volume of regrading requests. To automatically relate the contents of assignments to the scoring rubric, we present the first work on keyphrase-rubric relationship classification, i.e., relating assignment contents to rubric items by framing the task as a classification problem. We analyze both supervised and unsupervised methods and find that supervised approaches outperform unsupervised and topic-modelling approaches despite limited data, with the best supervised approach producing a maximum of 0.48 F1-score and the best unsupervised approach producing 0.31 F1-score. We further present exhaustive experiments and cluster analysis using multiple metrics, identifying the cases where the unsupervised and supervised methods are usable.
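
One simple unsupervised baseline in this vein is to embed keyphrases and rubric items in a shared vector space and assign each keyphrase to its nearest rubric entry. The sketch below uses TF-IDF and cosine similarity; the rubric items and keyphrases are made up for demonstration and are not the paper's data or its exact method.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # made-up rubric items and keyphrases, for demonstration only
    rubric = ["explains the algorithm design",
              "reports experimental results",
              "discusses limitations"]
    keyphrases = ["novel algorithm design", "results on the test set"]

    vectorizer = TfidfVectorizer().fit(rubric + keyphrases)
    sims = cosine_similarity(vectorizer.transform(keyphrases),
                             vectorizer.transform(rubric))
    for phrase, row in zip(keyphrases, sims):
        # assign each keyphrase to its most similar rubric item
        print(phrase, "->", rubric[row.argmax()])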

* v1 preprint. Working paper. More results to be added. arXiv admin note: substantial text overlap with arXiv:2003.07019 

Counterfactual fairness: removing direct effects through regularization

Feb 25, 2020
Pietro G. Di Stefano, James M. Hickey, Vlasios Vasileiou

Building machine learning models that are fair with respect to an unprivileged group is a topical problem. Modern fairness-aware algorithms often ignore causal effects and enforce fairness through modifications applicable to only a subset of machine learning models. In this work, we propose a new definition of fairness that incorporates causality through the Controlled Direct Effect (CDE). We develop regularizations to tackle classical fairness measures and present a causal regularization that satisfies our new fairness definition by removing the impact of unprivileged-group variables on the model outcomes, as measured by the CDE. These regularizations are applicable to any model trained by iteratively minimizing a loss through differentiation. We demonstrate our approaches using both gradient boosting and logistic regression on a synthetic dataset, the UCI Adult (Census) dataset, and a real-world credit-risk dataset. Our approaches were found to mitigate unfairness in the predictions with only small reductions in model performance.
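
The general recipe, a task loss plus a penalty on the direct effect of the protected attribute, can be sketched in a few lines. The version below penalizes the difference in model outputs between two counterfactual copies of the batch with the protected attribute flipped; this captures the spirit of a CDE-style penalty on synthetic data, and the paper's exact estimator and datasets differ.

    import torch
    import torch.nn as nn

    d, lam, prot = 5, 1.0, 0   # feature dim, penalty weight, protected column
    model = nn.Sequential(nn.Linear(d, 1), nn.Sigmoid())
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
    bce = nn.BCELoss()

    X = torch.randn(256, d)
    X[:, prot] = (X[:, prot] > 0).float()      # binary protected attribute
    y = torch.randint(0, 2, (256, 1)).float()  # toy labels

    for _ in range(200):
        # two counterfactual copies of the batch, protected attribute flipped
        X1, X0 = X.clone(), X.clone()
        X1[:, prot], X0[:, prot] = 1.0, 0.0
        direct_effect = (model(X1) - model(X0)).mean().abs()
        loss = bce(model(X), y) + lam * direct_effect  # task loss + penalty
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()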

* 10 pages, 4 figures 

Two Huge Title and Keyword Generation Corpora of Research Articles

Feb 11, 2020
Erion Çano, Ondřej Bojar

Recent developments in sequence-to-sequence learning with neural networks have considerably improved the quality of automatically generated text summaries and document keywords, creating the need for ever bigger training corpora. Metadata of research articles are usually easy to find online and can be used for research on various tasks. In this paper, we introduce two huge datasets for text summarization (OAGSX) and keyword generation (OAGKX) research, containing 34 million and 23 million records, respectively. The data were retrieved from the Open Academic Graph, a network of research profiles and publications. We carefully processed each record and also tried several extractive and abstractive methods on both tasks to create performance baselines for other researchers. We further illustrate the performance of those methods by previewing their outputs. In the near future, we would like to apply topic modeling to the two sets to derive subsets of research articles from more specific disciplines.

* 9 pages, 8 tables. Published in proceedings of LREC 2020, the 12th International Conference on Language Resources and Evaluation, Marseille, France 
