
"Text": models, code, and papers

Probing and Fine-tuning Reading Comprehension Models for Few-shot Event Extraction

Oct 21, 2020
Rui Feng, Jie Yuan, Chao Zhang

We study the problem of event extraction from text data, which requires detecting both target event types and their arguments. Typically, both the event detection and argument detection subtasks are formulated as supervised sequence labeling problems. We argue that the event extraction models so trained are inherently label-hungry and generalize poorly across domains and text genres. We propose a reading comprehension framework for event extraction. Specifically, we formulate event detection as a textual entailment prediction problem and argument detection as a question answering problem. By constructing proper query templates, our approach can effectively distill rich knowledge about tasks and label semantics from pretrained reading comprehension models. Moreover, our model can be fine-tuned with a small amount of data to boost its performance. Our experimental results show that our method performs strongly for zero-shot and few-shot event extraction, and it achieves state-of-the-art performance on the ACE 2005 benchmark when trained with full supervision.
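
A minimal sketch of the idea in this abstract, using off-the-shelf pretrained models: event detection is cast as textual entailment (via zero-shot classification) and argument detection as extractive QA driven by query templates. The model names, event types, and templates below are illustrative assumptions, not the paper's exact setup.

```python
from transformers import pipeline

sentence = "The company acquired its rival for $2 billion last week."

# Event detection as textual entailment: the zero-shot classification pipeline
# scores hypotheses such as "This text describes an acquisition event."
event_detector = pipeline("zero-shot-classification",
                          model="facebook/bart-large-mnli")
event_types = ["acquisition", "attack", "election", "bankruptcy"]
detection = event_detector(sentence,
                           candidate_labels=event_types,
                           hypothesis_template="This text describes a {} event.")
best_event = detection["labels"][0]
print("detected event type:", best_event, detection["scores"][0])

# Argument detection as question answering: query templates turn each
# argument role into a natural-language question over the same sentence.
argument_extractor = pipeline("question-answering",
                              model="deepset/roberta-base-squad2")
role_questions = {
    "acquirer": f"Who carried out the {best_event}?",
    "target": f"What was the target of the {best_event}?",
    "price": f"How much was paid in the {best_event}?",
}
for role, question in role_questions.items():
    answer = argument_extractor(question=question, context=sentence)
    print(role, "->", answer["answer"], round(answer["score"], 3))
```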


Graph Wasserstein Correlation Analysis for Movie Retrieval

Aug 06, 2020
Xueya Zhang, Tong Zhang, Xiaobin Hong, Zhen Cui, Jian Yang

Movie graphs play an important role in bridging heterogeneous modalities of videos and texts in human-centric retrieval. In this work, we propose Graph Wasserstein Correlation Analysis (GWCA) to deal with the core issue therein, i.e., comparison across heterogeneous graphs. Spectral graph filtering is introduced to encode graph signals, which are then embedded as probability distributions in a Wasserstein space, called graph Wasserstein metric learning. Such a seamless integration of graph signal filtering with metric learning results in a surprising consistency between the two learning processes, in which the goal of metric learning is exactly to optimize the signal filters, and vice versa. Further, we derive the solution of the graph comparison model as a classic generalized eigenvalue decomposition problem, which has an exact closed-form solution. Finally, GWCA, together with movie/text graph generation, is unified into a movie retrieval framework to evaluate our proposed method. Extensive experiments on the MovieGraphs dataset demonstrate the effectiveness of our GWCA as well as the entire framework.
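
A minimal sketch of the closed-form step the abstract mentions: the learned projection comes from a generalized eigenvalue problem A v = lambda B v. The covariance-like matrices here are random placeholders, not the actual graph-filtered signal statistics from the paper.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
d = 16                                    # signal dimension after graph filtering

# Toy cross- and within-set covariance surrogates (symmetric, regularized PSD).
X = rng.standard_normal((100, d))         # e.g. filtered video-graph signals
Y = rng.standard_normal((100, d))         # e.g. filtered text-graph signals
A = X.T @ Y + Y.T @ X                     # symmetrized cross-correlation term
B = X.T @ X + Y.T @ Y + 1e-3 * np.eye(d)  # regularized within-set scatter

# scipy.linalg.eigh solves the symmetric generalized problem in closed form;
# the top eigenvectors give the projection that maximizes the correlation term.
eigvals, eigvecs = eigh(A, B)
top_k = eigvecs[:, ::-1][:, :4]           # eigenvectors for the 4 largest eigenvalues
print("top generalized eigenvalues:", eigvals[::-1][:4])
```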


Semi-supervised Embedding Learning for High-dimensional Bayesian Optimization

May 29, 2020
Jingfan Chen, Guanghui Zhu, Rong Gu, Chunfeng Yuan, Yihua Huang

Bayesian optimization is a broadly applied methodology for optimizing expensive black-box functions. Despite its success, it still faces challenges in high-dimensional search spaces. To alleviate this problem, we propose a novel Bayesian optimization framework (termed SILBO), which iteratively finds a low-dimensional space in which to perform Bayesian optimization through semi-supervised dimension reduction. SILBO incorporates both labeled points and unlabeled points acquired from the acquisition function to guide the embedding space learning. To accelerate the learning procedure, we present a randomized method for generating the projection matrix. Furthermore, to map from the low-dimensional space back to the high-dimensional original space, we propose two mapping strategies, $\text{SILBO}_{FZ}$ and $\text{SILBO}_{FX}$, according to the evaluation overhead of the objective function. Experimental results on both synthetic functions and hyperparameter optimization tasks demonstrate that SILBO outperforms existing state-of-the-art high-dimensional Bayesian optimization methods.
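
A minimal sketch of Bayesian optimization through a low-dimensional embedding, in the spirit of the framework described above. The actual SILBO method learns the embedding semi-supervisedly; here the projection matrix is simply random, the objective is a toy placeholder, and scikit-optimize's GP-based optimizer stands in for the BO loop.

```python
import numpy as np
from skopt import gp_minimize

D, d = 100, 5                              # original and embedded dimensions
rng = np.random.default_rng(0)
projection = rng.standard_normal((D, d))   # randomized projection matrix

def expensive_black_box(x_high):
    # Toy objective: only 5 of the 100 dimensions actually matter.
    return float(np.sum((x_high[:5] - 0.5) ** 2))

def embedded_objective(z_low):
    # Map the low-dimensional candidate back to the original space,
    # clip it into the feasible box, then evaluate the expensive function.
    x_high = np.clip(projection @ np.asarray(z_low), -1.0, 1.0)
    return expensive_black_box(x_high)

result = gp_minimize(embedded_objective,
                     dimensions=[(-1.0, 1.0)] * d,   # search box in the embedding
                     n_calls=30, random_state=0)
print("best value found:", result.fun)
```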


Robust Layout-aware IE for Visually Rich Documents with Pre-trained Language Models

May 22, 2020
Mengxi Wei, Yifan He, Qiong Zhang

Many business documents processed in modern NLP and IR pipelines are visually rich: in addition to text, their semantics can also be captured by visual traits such as layout, format, and fonts. We study the problem of information extraction from visually rich documents (VRDs) and present a model that combines the power of large pre-trained language models and graph neural networks to efficiently encode both textual and visual information in business documents. We further introduce new fine-tuning objectives that improve in-domain unsupervised fine-tuning to better utilize large amounts of unlabeled in-domain data. We experiment on real-world invoice and resume datasets and show that the proposed method outperforms strong text-based RoBERTa baselines by 6.3% absolute F1 on invoices and 4.7% absolute F1 on resumes. When evaluated in a few-shot setting, our method requires up to 30x less annotation data than the baseline to achieve the same level of performance at ~90% F1.
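
A minimal sketch of the general recipe this abstract describes: encode text segments with a pretrained language model, then propagate their representations over a document layout graph. The toy adjacency matrix and the single GCN-style layer are illustrative assumptions; the paper's fine-tuning objectives and actual GNN architecture are not reproduced here.

```python
import torch
from transformers import RobertaTokenizer, RobertaModel

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
encoder = RobertaModel.from_pretrained("roberta-base")

# Text segments extracted from a visually rich document (e.g. invoice fields).
segments = ["Invoice No. 12345", "Total due: $980.00", "Date: 2020-05-22"]
batch = tokenizer(segments, padding=True, return_tensors="pt")
with torch.no_grad():
    # One text embedding per segment: the hidden state at the <s> position.
    text_feats = encoder(**batch).last_hidden_state[:, 0, :]   # (3, 768)

# Toy layout graph: connect segments that are spatially adjacent on the page.
adj = torch.tensor([[1., 1., 0.],
                    [1., 1., 1.],
                    [0., 1., 1.]])
deg_inv = torch.diag(1.0 / adj.sum(dim=1))

# One GCN-style propagation step followed by a linear projection:
# each segment's representation mixes in its layout neighbours.
project = torch.nn.Linear(768, 128)
fused = torch.relu(project(deg_inv @ adj @ text_feats))
print(fused.shape)   # (3, 128) layout-aware segment representations
```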

* 10 pages, to appear in SIGIR 2020 Industry Track 

A pre-training technique to localize medical BERT and enhance BioBERT

May 14, 2020
Shoya Wada, Toshihiro Takeda, Shiro Manabe, Shozo Konishi, Jun Kamohara, Yasushi Matsumura

Bidirectional Encoder Representations from Transformers (BERT) models for biomedical specialties, such as BioBERT and clinicalBERT, have significantly improved performance on biomedical text-mining tasks and enabled us to extract valuable information from the biomedical literature. However, these benefits have so far been limited to English, because high-quality medical document collections such as PubMed are scarce in other languages. We therefore propose a method that yields a high-performance BERT model from a small corpus. We use the method to train BERT models on small medical corpora in English and in Japanese, and evaluate them on the Biomedical Language Understanding Evaluation (BLUE) benchmark and on a Japanese medical-document-classification task, respectively. After confirming their satisfactory performance, we apply our method to develop a model that outperforms the pre-existing models. Bidirectional Encoder Representations from Transformers for Biomedical Text Mining by Osaka University (ouBioBERT) achieves the best scores on 7 of the 10 datasets of the BLUE benchmark, with a total score 1.0 points above that of BioBERT.
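
The abstract does not spell out the pre-training technique itself, so the following is only a generic sketch of domain-adaptive masked-language-model pre-training on a small in-domain corpus, not the specific method proposed in the paper. The corpus path and hyperparameters are placeholders.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# "medical_corpus.txt" is a placeholder path to a small domain corpus.
dataset = load_dataset("text", data_files={"train": "medical_corpus.txt"})["train"]
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
                      batched=True, remove_columns=["text"])

# Standard MLM objective: randomly mask 15% of the tokens and predict them.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="domain-bert", num_train_epochs=3,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=dataset,
        data_collator=collator).train()
```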

* We made the pre-trained weights of ouBioBERT and the source code for fine-tuning freely available at https://github.com/sy-wada/blue_benchmark_with_transformers 

Improving N-gram Language Models with Pre-trained Deep Transformer

Nov 22, 2019
Yiren Wang, Hongzhao Huang, Zhe Liu, Yutong Pang, Yongqiang Wang, ChengXiang Zhai, Fuchun Peng

Although n-gram language models (LMs) have been outperformed by state-of-the-art neural LMs, they are still widely used in speech recognition due to their high inference efficiency. In this paper, we demonstrate that n-gram LMs can be improved by neural LMs through a text-generation-based data augmentation method. In contrast to previous approaches, we employ a strategy of large-scale general-domain pre-training followed by in-domain fine-tuning to construct deep Transformer-based neural LMs. A large amount of in-domain text is generated with the well-trained deep Transformer to construct new n-gram LMs, which are then interpolated with the baseline n-gram systems. Empirical studies on different speech recognition tasks show that the proposed approach can effectively improve recognition accuracy. In particular, our proposed approach brings a significant relative word error rate reduction of up to 6.0% for domains with limited in-domain data.
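
A minimal sketch of the interpolation step described above: a baseline n-gram LM is linearly combined with one estimated from Transformer-generated in-domain text. The two tiny corpora, the bigram order, and the interpolation weight are placeholders.

```python
from collections import Counter

def bigram_lm(corpus):
    """Bigram probabilities with add-one smoothing from a whitespace-tokenized corpus."""
    tokens = corpus.split()
    unigrams, bigrams = Counter(tokens), Counter(zip(tokens, tokens[1:]))
    vocab = len(unigrams)
    return lambda w1, w2: (bigrams[(w1, w2)] + 1) / (unigrams[w1] + vocab)

# Baseline LM from original transcripts vs. LM from Transformer-generated text.
baseline_lm = bigram_lm("please call the office please call the help desk")
generated_lm = bigram_lm("please call customer support please call the support line")

lam = 0.6   # interpolation weight, normally tuned on held-out in-domain data
def interpolated(w1, w2):
    return lam * baseline_lm(w1, w2) + (1 - lam) * generated_lm(w1, w2)

print(interpolated("please", "call"))
print(interpolated("the", "support"))
```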


Predicting the Leading Political Ideology of YouTube Channels Using Acoustic, Textual, and Metadata Information

Oct 20, 2019
Yoan Dinkov, Ahmed Ali, Ivan Koychev, Preslav Nakov

We address the problem of predicting the leading political ideology, i.e., left-center-right bias, of YouTube channels of news media. Previous work on the problem has focused exclusively on text, analyzing the language used, the topics discussed, sentiment, and the like. In contrast, here we study videos, which yields an interesting multimodal setup. Starting with gold annotations about the leading political ideology of major world news media from Media Bias/Fact Check, we searched YouTube to find their corresponding channels and downloaded a recent sample of videos from each channel. We crawled more than 1,000 hours of YouTube video along with the corresponding subtitles and metadata, thus producing a new multimodal dataset. We further developed a multimodal deep-learning architecture for the task. Our analysis shows that using the acoustic signal improved bias detection by more than 6% absolute over using text and metadata only. We release the dataset to the research community, hoping to help advance the field of multimodal political bias detection.
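
A minimal sketch of late fusion of acoustic, textual, and metadata features for channel-level bias classification. Feature dimensions, the averaging over videos, and the classifier head are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class ChannelBiasClassifier(nn.Module):
    def __init__(self, acoustic_dim=128, text_dim=768, meta_dim=32):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(acoustic_dim + text_dim + meta_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 3),   # left / center / right
        )

    def forward(self, acoustic, text, meta):
        # Average per-video features to get one vector per channel,
        # then concatenate the three modalities and classify.
        channel_vec = torch.cat([acoustic.mean(0), text.mean(0), meta.mean(0)])
        return self.fuse(channel_vec)

model = ChannelBiasClassifier()
videos = 10   # number of sampled videos for one channel
logits = model(torch.randn(videos, 128), torch.randn(videos, 768), torch.randn(videos, 32))
print(logits.softmax(-1))   # predicted left/center/right distribution
```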

* INTERSPEECH-2019 
* media bias, political ideology, Youtube channels, propaganda, disinformation, fake news 

Barack's Wife Hillary: Using Knowledge-Graphs for Fact-Aware Language Modeling

Jun 20, 2019
Robert L. Logan IV, Nelson F. Liu, Matthew E. Peters, Matt Gardner, Sameer Singh

Modeling human language requires the ability not only to generate fluent text but also to encode factual knowledge. However, traditional language models are only capable of remembering facts seen at training time, and often have difficulty recalling them. To address this, we introduce the knowledge graph language model (KGLM), a neural language model with mechanisms for selecting and copying facts from a knowledge graph that are relevant to the context. These mechanisms enable the model to render information it has never seen before, as well as generate out-of-vocabulary tokens. We also introduce the Linked WikiText-2 dataset, a corpus of annotated text aligned to the Wikidata knowledge graph whose contents (roughly) match the popular WikiText-2 benchmark. In experiments, we demonstrate that the KGLM achieves significantly better performance than a strong baseline language model. We additionally compare different language models' ability to complete sentences requiring factual knowledge, showing that the KGLM outperforms even very large language models in generating facts.
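
A minimal sketch of the copy idea behind a knowledge-graph-aware language model: at each step the next-token distribution mixes a standard vocabulary softmax with a distribution over surface forms of facts retrieved from the graph. The tiny vocabulary, the single fact, and the gating layer are placeholders, not the KGLM's actual parameterization.

```python
import torch

vocab = ["<unk>", "barack", "obama", "is", "married", "to", "michelle"]
kg_facts = {"spouse": "michelle"}            # fact relevant to the current context

hidden = torch.randn(16)                     # decoder hidden state at this step
vocab_logits = torch.randn(len(vocab))       # ordinary LM head output
vocab_probs = vocab_logits.softmax(-1)

# Copy distribution: probability mass over tokens that realize retrieved facts.
copy_probs = torch.zeros(len(vocab))
copy_probs[vocab.index(kg_facts["spouse"])] = 1.0

# A learned gate decides whether to generate from the vocabulary or to copy
# a fact; here it is a single linear layer applied to the hidden state.
gate = torch.sigmoid(torch.nn.Linear(16, 1)(hidden))
next_token_probs = gate * copy_probs + (1 - gate) * vocab_probs
print(vocab[int(next_token_probs.argmax())])
```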


Conversing by Reading: Contentful Neural Conversation with On-demand Machine Reading

Jun 07, 2019
Lianhui Qin, Michel Galley, Chris Brockett, Xiaodong Liu, Xiang Gao, Bill Dolan, Yejin Choi, Jianfeng Gao

Although neural conversation models are effective in learning how to produce fluent responses, their primary challenge lies in knowing what to say to make the conversation contentful and non-vacuous. We present a new end-to-end approach to contentful neural conversation that jointly models response generation and on-demand machine reading. The key idea is to provide the conversation model with relevant long-form text on the fly as a source of external knowledge. The model performs QA-style reading comprehension on this text in response to each conversational turn, thereby allowing for more focused integration of external knowledge than has been possible in prior approaches. To support further research on knowledge-grounded conversation, we introduce a new large-scale conversation dataset grounded in external web pages (2.8M turns, 7.4M sentences of grounding). Both human evaluation and automated metrics show that our approach results in more contentful responses compared to a variety of previous methods, improving both the informativeness and diversity of generated output.
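
A minimal sketch of the "conversing by reading" idea: for each user turn, run QA-style reading comprehension over a grounding document provided on the fly, then condition the reply on the extracted span. The document text, the QA model, and the templated reply are illustrative assumptions; the paper trains an end-to-end neural generator rather than filling a template.

```python
from transformers import pipeline

reader = pipeline("question-answering", model="deepset/roberta-base-squad2")

# Long-form grounding text supplied on demand (e.g. scraped from a web page).
grounding_document = (
    "The Great Barrier Reef is the world's largest coral reef system, "
    "composed of over 2,900 individual reefs off the coast of Queensland, Australia."
)

def respond(user_turn):
    # Read the document in the context of the current turn ...
    span = reader(question=user_turn, context=grounding_document)
    # ... then ground the response in the extracted span. A full system would
    # feed the span into a neural response generator instead of a template.
    return f"From what I just read, the answer seems to be: {span['answer']}."

print(respond("Where is the Great Barrier Reef located?"))
```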

* ACL 2019 long paper 
