"Information Extraction": models, code, and papers

Enhancing Drug-Drug Interaction Extraction from Texts by Molecular Structure Information

May 15, 2018
Masaki Asada, Makoto Miwa, Yutaka Sasaki

We propose a novel neural method to extract drug-drug interactions (DDIs) from texts using external drug molecular structure information. We encode textual drug pairs with convolutional neural networks and their molecular pairs with graph convolutional networks (GCNs), and then we concatenate the outputs of these two networks. In the experiments, we show that GCNs can predict DDIs from the molecular structures of drugs with high accuracy and that the molecular information can enhance text-based DDI extraction by 2.39 percentage points in F-score on the DDIExtraction 2013 shared task data set.
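
A minimal sketch of the two-branch idea (a text CNN for the sentence and a simple GCN for each drug's molecular graph, with the resulting vectors concatenated), written in PyTorch. The layer sizes, the mean-pooled GCN readout, and all class names are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """Encode the drug-pair sentence with a 1-D convolution and max-pooling."""
    def __init__(self, vocab_size=5000, emb_dim=100, n_filters=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)

    def forward(self, token_ids):                        # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)          # (batch, emb, seq)
        return torch.relu(self.conv(x)).max(dim=2).values

class SimpleGCN(nn.Module):
    """One graph-convolution step H' = ReLU(A_hat H W), mean-pooled over atoms."""
    def __init__(self, atom_feat_dim=32, hidden=64):
        super().__init__()
        self.lin = nn.Linear(atom_feat_dim, hidden)

    def forward(self, atom_feats, adj):                  # (b, n, f), (b, n, n)
        return torch.relu(adj @ self.lin(atom_feats)).mean(dim=1)

class DDIClassifier(nn.Module):
    """Concatenate the text vector with the two molecular vectors and classify."""
    def __init__(self, n_labels=5):
        super().__init__()
        self.text_enc, self.mol_enc = TextCNN(), SimpleGCN()
        self.out = nn.Linear(64 + 64 + 64, n_labels)

    def forward(self, token_ids, feats1, adj1, feats2, adj2):
        z = torch.cat([self.text_enc(token_ids),
                       self.mol_enc(feats1, adj1),
                       self.mol_enc(feats2, adj2)], dim=-1)
        return self.out(z)
```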

* accepted as a short paper at ACL2018 
  

Towards Unsupervised Learning of Temporal Relations between Events

Jan 23, 2014
Seyed Abolghasem Mirroshandel, Gholamreza Ghassem-Sani

Automatic extraction of temporal relations between event pairs is an important task for several natural language processing applications such as Question Answering, Information Extraction, and Summarization. Since most existing methods are supervised and require large corpora, which for many languages do not exist, we have concentrated our efforts on reducing the need for annotated data as much as possible. This paper presents two different algorithms towards this goal. The first algorithm is a weakly supervised machine learning approach for classification of temporal relations between events. In the first stage, the algorithm learns a general classifier from an annotated corpus. Then, inspired by the hypothesis of "one type of temporal relation per discourse", it extracts useful information from a cluster of topically related documents. We show that by combining the global information of such a cluster with the local decisions of a general classifier, a bootstrapping cross-document classifier can be built to extract temporal relations between events. Our experiments show that without any additional annotated data, the accuracy of the proposed algorithm is higher than that of several previous successful systems. The second proposed method for temporal relation extraction is based on the expectation maximization (EM) algorithm. Within EM, we used different techniques such as a greedy best-first search and integer linear programming for temporal inconsistency removal. We think that the experimental results of our EM-based algorithm, as a first step toward a fully unsupervised temporal relation extraction method, are encouraging.
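
As a toy illustration of the cross-document idea (not the paper's algorithm), the sketch below lets a cluster-level majority relation override low-confidence local predictions, in the spirit of "one type of temporal relation per discourse"; the threshold and data are made up.

```python
from collections import Counter

def relabel_with_cluster_vote(local_preds, threshold=0.7):
    """local_preds: (pair_id, predicted_relation, confidence) tuples for all
    event pairs in one cluster of topically related documents. If one relation
    dominates the cluster, low-confidence pairs adopt it."""
    counts = Counter(rel for _, rel, _ in local_preds)
    dominant, dom_count = counts.most_common(1)[0]
    if dom_count / len(local_preds) < threshold:
        return {pid: rel for pid, rel, _ in local_preds}   # keep local decisions
    return {pid: (rel if conf >= threshold else dominant)
            for pid, rel, conf in local_preds}

# Three confident BEFORE predictions pull the uncertain AFTER one along.
preds = [("p1", "BEFORE", 0.9), ("p2", "BEFORE", 0.8),
         ("p3", "BEFORE", 0.8), ("p4", "AFTER", 0.4)]
print(relabel_with_cluster_vote(preds))
```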

* Journal Of Artificial Intelligence Research, Volume 45, pages 125-163, 2012 
  

WebSets: Extracting Sets of Entities from the Web Using Unsupervised Information Extraction

Jul 01, 2013
Bhavana Dalvi, William W. Cohen, Jamie Callan

We describe an open-domain information extraction method for extracting concept-instance pairs from an HTML corpus. Most earlier approaches to this problem rely on combining clusters of distributionally similar terms and concept-instance pairs obtained with Hearst patterns. In contrast, our method relies on a novel approach for clustering terms found in HTML tables, and then assigning concept names to these clusters using Hearst patterns. The method can be efficiently applied to a large corpus, and experimental results on several datasets show that our method can accurately extract large numbers of concept-instance pairs.
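
A rough sketch of the two steps described above (grouping terms that share HTML table columns, then naming a cluster with a Hearst pattern found in the page text); the regex, the merging rule, and the example data are simplifications, not the WebSets implementation.

```python
import re
from collections import defaultdict

HEARST = re.compile(r"(\w+s)\s+such\s+as\s+([\w ,]+)", re.IGNORECASE)

def cluster_by_column(table_columns):
    """table_columns: lists of cell strings, one list per HTML table column.
    Terms sharing a column are merged into the same cluster."""
    term_to_cluster, clusters = {}, defaultdict(set)
    for col_id, cells in enumerate(table_columns):
        target = next((term_to_cluster[c] for c in cells if c in term_to_cluster), col_id)
        for c in cells:
            term_to_cluster[c] = target
            clusters[target].add(c)
    return list(clusters.values())

def name_cluster(cluster, page_text):
    """Assign a concept name if a Hearst pattern mentions a cluster member."""
    for concept, instances in HEARST.findall(page_text):
        if any(inst.strip() in cluster for inst in instances.split(",")):
            return concept.lower()
    return None

cols = [["Paris", "Berlin", "Rome"], ["Rome", "Madrid"]]
text = "European cities such as Paris, Rome and Berlin attract tourists."
print([(name_cluster(c, text), sorted(c)) for c in cluster_by_column(cols)])
```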

* 10 pages; International Conference on Web Search and Data Mining 2012 
  

Understanding Spatial Language in Radiology: Representation Framework, Annotation, and Spatial Relation Extraction from Chest X-ray Reports using Deep Learning

Aug 13, 2019
Surabhi Datta, Yuqi Si, Laritza Rodriguez, Sonya E Shooshan, Dina Demner-Fushman, Kirk Roberts

We define a representation framework for extracting spatial information from radiology reports (Rad-SpRL). We annotated a total of 2000 chest X-ray reports with 4 spatial roles corresponding to the common radiology entities. Our focus is on extracting detailed information from a radiologist's interpretation containing a radiographic finding, its anatomical location, corresponding probable diagnoses, as well as associated hedging terms. For this, we propose a deep learning-based natural language processing (NLP) method involving both word- and character-level encodings. Specifically, we utilize a bidirectional long short-term memory (Bi-LSTM) conditional random field (CRF) model for extracting the spatial roles. The model achieved average F1 measures of 90.28 and 94.61 for extracting the Trajector and Landmark roles, respectively, whereas the performance was moderate for the Diagnosis and Hedge roles, with average F1 of 71.47 and 73.27, respectively. The corpus will soon be made available upon request.
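
A compact PyTorch sketch of a word- plus character-level Bi-LSTM tagger for the four spatial roles with BIO tags; the CRF layer used in the paper is left out for brevity, and the tag set, vocabulary sizes, and dimensions are assumptions for illustration.

```python
import torch
import torch.nn as nn

TAGS = ["O", "B-TRAJECTOR", "I-TRAJECTOR", "B-LANDMARK", "I-LANDMARK",
        "B-DIAGNOSIS", "I-DIAGNOSIS", "B-HEDGE", "I-HEDGE"]

class SpatialRoleTagger(nn.Module):
    def __init__(self, vocab=10000, chars=100, wdim=100, cdim=25, hidden=128):
        super().__init__()
        self.wemb = nn.Embedding(vocab, wdim)
        self.cemb = nn.Embedding(chars, cdim)
        self.char_lstm = nn.LSTM(cdim, cdim, bidirectional=True, batch_first=True)
        self.word_lstm = nn.LSTM(wdim + 2 * cdim, hidden,
                                 bidirectional=True, batch_first=True)
        self.emit = nn.Linear(2 * hidden, len(TAGS))     # per-token tag scores

    def forward(self, word_ids, char_ids):
        # char_ids: (batch, seq_len, max_word_len) -> one character vector per word
        b, s, w = char_ids.shape
        _, (h, _) = self.char_lstm(self.cemb(char_ids.view(b * s, w)))
        char_vec = h.transpose(0, 1).reshape(b, s, -1)   # concat fwd/bwd final states
        x = torch.cat([self.wemb(word_ids), char_vec], dim=-1)
        out, _ = self.word_lstm(x)
        return self.emit(out)                            # (batch, seq_len, n_tags)
```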

  

Gravitational Wave Detection and Information Extraction via Neural Networks

Mar 22, 2020
Gerson R. Santos, Marcela P. Figueiredo, Antonio de Pádua Santos, Pavlos Protopapas, Tiago A. E. Ferreira

The Laser Interferometer Gravitational-Wave Observatory (LIGO) was the first laboratory to measure gravitational waves. An exceptional experimental design was needed to measure distance changes much smaller than the radius of a proton. Likewise, the data analysis required to confirm events and extract information is a tremendously hard task. Here, we show a computational procedure based on artificial neural networks to detect a gravitational wave event and extract its ring-down time from the LIGO data. With this proposal, it is possible to build a probabilistic thermometer for gravitational wave detection and to obtain physical information about the astronomical body system that created the phenomenon. Here, the ring-down time is determined by a direct measurement of the data, without the need for numerical relativity techniques and high computational power.

  

Extraction of evidence tables from abstracts of randomized clinical trials using a maximum entropy classifier and global constraints

Sep 17, 2015
Antonio Trenta, Anthony Hunter, Sebastian Riedel

Systematic use of the published results of randomized clinical trials is increasingly important in evidence-based medicine. In order to collate and analyze the results from potentially numerous trials, evidence tables are used to represent trials concerning a set of interventions of interest. An evidence table has columns for the patient group, for each of the interventions being compared, for the criterion for the comparison (e.g. the proportion who survived 5 years after treatment), and for each of the results. Currently, it is a labour-intensive activity to read each published paper and extract the information for each field in an evidence table. There have been some NLP studies investigating how some of the features from papers can be extracted, or at least the relevant sentences identified. However, there is a lack of an NLP system for the systematic extraction of each item of information required for an evidence table. We address this need with a combination of a maximum entropy classifier and integer linear programming. We use the latter to handle constraints on what is an acceptable classification of the features to be extracted. With experimental results, we demonstrate substantial advantages in using global constraints (such as requiring that the features describing the patient group and the interventions occur before the features describing the results of the comparison).
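
A toy version of the global-constraint idea: the paper pairs a maximum entropy classifier with integer linear programming, whereas the brute-force search below stands in for the ILP solver, and the labels and scores are invented for illustration. The best labeling is the highest-scoring one in which patient-group and intervention features come before result features.

```python
from itertools import product

LABELS = ["PATIENT_GROUP", "INTERVENTION", "RESULT", "NONE"]
ORDER = {"PATIENT_GROUP": 0, "INTERVENTION": 1, "RESULT": 2}

def best_consistent_labeling(scores):
    """scores: one dict of label -> classifier probability per sentence.
    Return the highest-scoring labeling whose non-NONE labels appear in the
    required order (patient group, then interventions, then results)."""
    best, best_score = None, float("-inf")
    for labels in product(LABELS, repeat=len(scores)):
        positions = [ORDER[l] for l in labels if l != "NONE"]
        if positions != sorted(positions):
            continue                                  # violates the global order
        total = sum(s[l] for s, l in zip(scores, labels))
        if total > best_score:
            best, best_score = labels, total
    return best

sentence_scores = [
    {"PATIENT_GROUP": 0.45, "INTERVENTION": 0.50, "RESULT": 0.02, "NONE": 0.03},
    {"PATIENT_GROUP": 0.55, "INTERVENTION": 0.35, "RESULT": 0.05, "NONE": 0.05},
    {"PATIENT_GROUP": 0.05, "INTERVENTION": 0.10, "RESULT": 0.80, "NONE": 0.05},
]
# The locally best labels (INTERVENTION, PATIENT_GROUP, RESULT) violate the
# ordering, so the constrained search returns an order-consistent alternative.
print(best_consistent_labeling(sentence_scores))
```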

* 27 pages, 10 tables 
  

Dimensionality Reduction and Classification Feature Using Mutual Information Applied to Hyperspectral Images: A Wrapper Strategy Algorithm Based on Minimizing the Error Probability Using the Inequality of Fano

Oct 31, 2012
Elkebir Sarhrouni, Ahmed Hammouch, Driss Aboutajdine

In the feature classification domain, the choice of data widely affects the results. For hyperspectral images, not all bands contain useful information; some bands are irrelevant, such as those affected by various atmospheric effects (see Figure 4), and decrease the classification accuracy. There also exist redundant bands that complicate the learning system and produce incorrect predictions [14]. Even if the bands contain enough information about the scene, they may not predict the classes correctly if the dimensionality of the image space (see Figure 3) is so large that many samples are needed to detect the relationship between the bands and the scene (the Hughes phenomenon) [10]. We can reduce the dimensionality of hyperspectral images by selecting only the relevant bands (feature selection or subset selection methodology), or by extracting, from the original bands, new bands containing the maximal information about the classes, using any functions, logical or numerical (feature extraction methodology) [11][9]. Here we focus on feature selection using mutual information. Hyperspectral images have three advantages over multispectral images [6],
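
The sketch below is not the paper's wrapper algorithm (which minimizes an error-probability bound derived from Fano's inequality); it only shows the simpler filter-style step of ranking bands by their mutual information with the class labels, on synthetic data, using scikit-learn.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n_pixels, n_bands = 500, 50
X = rng.normal(size=(n_pixels, n_bands))           # reflectance per band (synthetic)
y = (X[:, 7] + 0.5 * X[:, 23] > 0).astype(int)     # classes depend on bands 7 and 23

mi = mutual_info_classif(X, y, random_state=0)     # MI between each band and the labels
selected = np.argsort(mi)[::-1][:5]                # keep the 5 most informative bands
print("selected bands:", selected)                 # bands 7 and 23 should rank near the top
X_reduced = X[:, selected]                         # reduced-dimensionality image cube
```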

* Applied Mathematical Sciences, Vol. 6, 2012, no. 102, 5073 - 5084 
* 12 page, 5 figures. arXiv admin note: substantial text overlap with arXiv:1210.0528, arXiv:1210.0052 
  

Uncovering Main Causalities for Long-tailed Information Extraction

Sep 11, 2021
Guoshun Nan, Jiaqi Zeng, Rui Qiao, Zhijiang Guo, Wei Lu

Information Extraction (IE) aims to extract structural information from unstructured texts. In practice, long-tailed distributions caused by the selection bias of a dataset may lead to incorrect correlations, also known as spurious correlations, between entities and labels in conventional likelihood models. This motivates us to propose counterfactual IE (CFIE), a novel framework that aims to uncover the main causalities behind data from the view of causal inference. Specifically, 1) we first introduce a unified structural causal model (SCM) for various IE tasks, describing the relationships among variables; 2) with our SCM, we then generate counterfactuals based on an explicit language structure to better calculate the direct causal effect during the inference stage; 3) we further propose a novel debiasing approach to yield more robust predictions. Experiments on three IE tasks across five public datasets show the effectiveness of our CFIE model in mitigating the spurious correlation issues.
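
A very rough sketch of the generic counterfactual-debiasing pattern (contrasting a factual forward pass with one on a counterfactual input), not the exact CFIE formulation, which builds its counterfactuals from a structural causal model and explicit language structure; the model and tensor names are hypothetical.

```python
import torch

def debiased_logits(model, input_ids, counterfactual_ids):
    """Approximate a direct causal effect by subtracting the logits produced
    on a counterfactual input (e.g. context masked, entity mention kept)."""
    with torch.no_grad():
        factual = model(input_ids)                  # logits on the real input
        counterfactual = model(counterfactual_ids)  # logits on the intervened input
    return factual - counterfactual                 # bias-adjusted scores

# Hypothetical usage:
# logits = debiased_logits(ie_model, batch["input_ids"], batch["masked_input_ids"])
# prediction = logits.argmax(dim=-1)
```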

* Accepted as a long paper in the main conference of EMNLP 2021 
  

Automatic Taxonomy Extraction from Query Logs with no Additional Sources of Information

Oct 05, 2015
Miguel Fernandez-Fernandez, Daniel Gayo-Avello

Search engine logs store detailed information on Web users' interactions. Thus, as more and more people use search engines on a daily basis, important trails of users' common knowledge are being recorded in those files. Previous research has shown that it is possible to extract concept taxonomies from full-text documents, while other scholars have proposed methods to obtain similar queries from query logs. We propose a mixture of both lines of research, that is, mining query logs not to find related queries or query hierarchies, but actual term taxonomies that could be used to improve search engine effectiveness and efficiency. As a result, in this study we have developed a method that combines lexical heuristics with a supervised classification model to successfully extract hyponymy relations from specialization search patterns revealed in log missions, with no additional sources of information, and in a language-independent way.
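
A simplified illustration of one specialization heuristic: a later query in the same search mission that keeps an earlier query's head noun but adds modifiers yields a candidate hyponym of that earlier query. The real method additionally filters candidates with a supervised classifier, and this particular string test is an assumption.

```python
def candidate_hyponyms(mission):
    """mission: chronologically ordered queries from one search mission."""
    pairs = []
    for i, general in enumerate(mission):
        for specific in mission[i + 1:]:
            s, g = specific.split(), general.split()
            # "hybrid cars" specializes "cars": same trailing head, extra modifiers
            if len(s) > len(g) and s[-len(g):] == g:
                pairs.append((specific, general))   # (hyponym, hypernym) candidate
    return pairs

print(candidate_hyponyms(["cars", "hybrid cars", "used hybrid cars"]))
# [('hybrid cars', 'cars'), ('used hybrid cars', 'cars'), ('used hybrid cars', 'hybrid cars')]
```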

* 21 pages, 4 figures, 5 tables. Old (2012) unpublished manuscript 
  

Unified Structure Generation for Universal Information Extraction

Mar 23, 2022
Yaojie Lu, Qing Liu, Dai Dai, Xinyan Xiao, Hongyu Lin, Xianpei Han, Le Sun, Hua Wu

Information extraction suffers from its varying targets, heterogeneous structures, and demand-specific schemas. In this paper, we propose a unified text-to-structure generation framework, namely UIE, which can universally model different IE tasks, adaptively generate targeted structures, and collaboratively learn general IE abilities from different knowledge sources. Specifically, UIE uniformly encodes different extraction structures via a structured extraction language, adaptively generates target extractions via a schema-based prompt mechanism, the structural schema instructor, and captures common IE abilities via a large-scale pre-trained text-to-structure model. Experiments show that UIE achieves state-of-the-art performance on 4 IE tasks and 13 datasets, across all supervised, low-resource, and few-shot settings, for a wide range of entity, relation, event and sentiment extraction tasks and their unification. These results verify the effectiveness, universality, and transferability of UIE.
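
A hedged sketch of the text-to-structure interface described above: a schema-based prompt (structural schema instructor) lists the target types, and the model's bracketed output (structured extraction language) is parsed back into records. The token spellings, the example output, and the flat parser are approximations for illustration, not the exact format of the released UIE models.

```python
import re

def build_ssi(entity_types, relation_types, text):
    """Compose the schema prompt that is prepended to the input sentence."""
    spots = " ".join(f"[spot] {t}" for t in entity_types)
    assos = " ".join(f"[asso] {r}" for r in relation_types)
    return f"{spots} {assos} [text] {text}"

def parse_sel(sel):
    """Pull (type, span) pairs out of a bracketed SEL-style string.
    Real SEL is nested; this flat regex is only enough for the example below."""
    return re.findall(r"\(\s*([^:()]+?)\s*:\s*([^:()]+?)\s*(?=[()])", sel)

prompt = build_ssi(["person", "organization"], ["work for"],
                   "Steve Wozniak co-founded Apple.")
# A text-to-structure model fine-tuned on SEL might generate something like:
generated = "((person: Steve Wozniak (work for: Apple)) (organization: Apple))"
print(parse_sel(generated))
# [('person', 'Steve Wozniak'), ('work for', 'Apple'), ('organization', 'Apple')]
```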

* Accepted to the main conference of ACL2022 
  