"Information Extraction": models, code, and papers

A Two-Stream AMR-enhanced Model for Document-level Event Argument Extraction

Apr 30, 2022
Runxin Xu, Peiyi Wang, Tianyu Liu, Shuang Zeng, Baobao Chang, Zhifang Sui

Most previous studies extract events from a single sentence, while document-level event extraction remains under-explored. In this paper, we focus on extracting event arguments from an entire document, which mainly faces two critical problems: a) the long-distance dependency between a trigger and its arguments across sentences; b) the distracting context around an event in the document. To address these issues, we propose a Two-Stream Abstract Meaning Representation-enhanced extraction model (TSAR). TSAR encodes the document from different perspectives with a two-stream encoding module, to utilize both local and global information and reduce the impact of distracting context. In addition, TSAR introduces an AMR-guided interaction module that captures both intra-sentential and inter-sentential features, based on locally and globally constructed AMR semantic graphs. An auxiliary boundary loss is introduced to explicitly enhance the boundary information of text spans. Extensive experiments show that TSAR outperforms the previous state of the art by a large margin, with gains of 2.54 F1 and 5.13 F1 on the public RAMS and WikiEvents datasets respectively, demonstrating its superiority in cross-sentence argument extraction. We release our code at https://github.com/PKUnlp-icler/TSAR.

* Long paper in NAACL 2022 main conference 
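
To make the auxiliary boundary loss concrete, here is a minimal PyTorch sketch that scores every token as a candidate span start or end and applies cross-entropy against gold boundary positions. The encoder, tensor shapes, and loss weighting are illustrative assumptions, not the authors' implementation; see the repository above for the real code.

# Hypothetical sketch of an auxiliary span-boundary loss (not the TSAR code).
import torch
import torch.nn as nn

class BoundaryHead(nn.Module):
    """Scores every token as a candidate argument start/end position."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.start_scorer = nn.Linear(hidden_size, 1)
        self.end_scorer = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states, start_idx, end_idx):
        # hidden_states: (batch, seq_len, hidden) from any document encoder.
        start_logits = self.start_scorer(hidden_states).squeeze(-1)  # (batch, seq_len)
        end_logits = self.end_scorer(hidden_states).squeeze(-1)
        loss_fn = nn.CrossEntropyLoss()
        # Cross-entropy over token positions pushes the encoder to mark
        # span boundaries explicitly, which is what the auxiliary loss intends.
        return loss_fn(start_logits, start_idx) + loss_fn(end_logits, end_idx)

head = BoundaryHead(hidden_size=768)
states = torch.randn(2, 128, 768)                  # stand-in encoder output
loss = head(states, torch.tensor([5, 40]), torch.tensor([9, 44]))
loss.backward()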
  

Improving Open Information Extraction via Iterative Rank-Aware Learning

May 31, 2019
Zhengbao Jiang, Pengcheng Yin, Graham Neubig

Open information extraction (IE) is the task of extracting open-domain assertions from natural language sentences. A key step in open IE is confidence modeling, ranking the extractions based on their estimated quality to adjust precision and recall of extracted assertions. We found that the extraction likelihood, a confidence measure used by current supervised open IE systems, is not well calibrated when comparing the quality of assertions extracted from different sentences. We propose an additional binary classification loss to calibrate the likelihood to make it more globally comparable, and an iterative learning process, where extractions generated by the open IE model are incrementally included as training samples to help the model learn from trial and error. Experiments on OIE2016 demonstrate the effectiveness of our method. Code and data are available at https://github.com/jzbjyb/oie_rank.

* Proceedings of ACL 2019 
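
The calibration idea admits a compact sketch: pool extraction likelihoods across sentences, fit a binary classifier on correctness labels, and rank by the calibrated probabilities. This illustrates the concept only; the scores and labels below are invented, and the paper's iterative training loop is not reproduced.

# Hypothetical sketch of calibrating extraction confidences with a binary
# classification objective (illustrative, not the paper's training code).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Model likelihoods for extractions pooled across many sentences,
# paired with binary correctness labels from the training data.
likelihoods = np.array([[0.91], [0.85], [0.40], [0.77], [0.30], [0.95]])
correct = np.array([1, 0, 0, 1, 0, 1])

# The binary loss learns a monotone rescaling, so scores from different
# sentences become comparable on one global scale.
calibrator = LogisticRegression().fit(likelihoods, correct)
calibrated = calibrator.predict_proba(likelihoods)[:, 1]
ranking = np.argsort(-calibrated)  # rank assertions by calibrated confidence
print(ranking)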
  

End-to-End Trainable One-Stage Parking Slot Detection Integrating Global and Local Information

Mar 05, 2020
Jae Kyu Suhr, Ho Gi Jung

This paper proposes an end-to-end trainable one-stage parking slot detection method for around view monitor (AVM) images. The proposed method simultaneously acquires global information (entrance, type, and occupancy of a parking slot) and local information (location and orientation of junctions) using a convolutional neural network (CNN), and integrates them to detect parking slots with their properties. The method divides an AVM image into a grid and performs CNN-based feature extraction. For each cell of the grid, the global and local information of the parking slot is obtained by applying convolution filters to the extracted feature map. Final detection results are produced by integrating the global and local information of the parking slot through non-maximum suppression (NMS). Since the proposed method obtains most of the information about the parking slot using a fully convolutional network without a region proposal stage, it is an end-to-end trainable one-stage detector. In experiments, this method was quantitatively evaluated on a public dataset and outperformed previous methods, achieving both recall and precision of 99.77%, type classification accuracy of 100%, and occupancy classification accuracy of 99.31%, while processing 60 frames per second.
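
The integration step rests on non-maximum suppression. Below is a generic NMS routine of the kind such a detector would apply to its per-cell detections; the axis-aligned box format and IoU threshold are assumptions, not details from the paper.

# Generic non-maximum suppression (illustrative; box format is assumed).
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.5):
    """boxes: (N, 4) as [x1, y1, x2, y2]; returns indices of kept boxes."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection-over-union of the top box with the remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou < iou_thresh]  # drop overlapping detections
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
print(nms(boxes, np.array([0.9, 0.8, 0.7])))  # -> [0, 2]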

  

A Generative Model for Relation Extraction and Classification

Feb 26, 2022
Jian Ni, Gaetano Rossiello, Alfio Gliozzo, Radu Florian

Relation extraction (RE) is an important information extraction task which provides essential information to many NLP applications such as knowledge base population and question answering. In this paper, we present a novel generative model for relation extraction and classification (which we call GREC), where RE is modeled as a sequence-to-sequence generation task. We explore various encoding representations for the source and target sequences, and design effective schemes that enable GREC to achieve state-of-the-art performance on three benchmark RE datasets. In addition, we introduce negative sampling and decoding scaling techniques which provide a flexible tool to tune the precision and recall performance of the model. Our approach can be extended to extract all relation triples from a sentence in one pass. Although the one-pass approach incurs certain performance loss, it is much more computationally efficient.
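
As a rough sketch of casting RE as sequence-to-sequence generation, the snippet below marks an entity pair in the source sequence and has an encoder-decoder model generate the relation label. The markup scheme and checkpoint are placeholders rather than GREC's actual encoding, and without fine-tuning on an RE dataset the model will not emit sensible labels.

# Hypothetical sketch of relation extraction as seq2seq generation
# (the linearization here is an assumption, not GREC's encoding scheme).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Mark the entity pair in the source sequence; the target sequence is
# the relation label, which the decoder is trained to generate.
source = ("relation between <head> Marie Curie </head> and "
          "<tail> physics </tail>: Marie Curie was a pioneer of physics.")
inputs = tokenizer(source, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))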

  

Design of Automatically Adaptable Web Wrappers

Mar 07, 2011
Emilio Ferrara, Robert Baumgartner

Nowadays, the huge amount of information distributed through the Web motivates the study of techniques for extracting relevant data in an efficient and reliable way. Both academia and industry have developed several approaches to Web data extraction, for example using techniques from artificial intelligence or machine learning. Commonly adopted procedures, namely wrappers, ensure a high degree of precision of the information extracted from Web pages, and, at the same time, must prove robust in order not to compromise the quality and reliability of the data themselves. In this paper, we focus on some experimental aspects related to the robustness of the data extraction process and the possibility of automatically adapting wrappers. We discuss the implementation of algorithms for finding similarities between two different versions of a Web page, in order to handle modifications, avoid the failure of data extraction tasks, and ensure the reliability of the extracted information. Our purpose is to evaluate the performance, advantages, and drawbacks of our novel system for automatic wrapper adaptation.

* Proceedings of the 3rd International Conference on Agents and Artificial Intelligence, pp 211-216, 2011 
* 7 pages, 2 figures, In Proceedings of the 3rd International Conference on Agents and Artificial Intelligence (ICAART 2011) 
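
A minimal sketch of the page-similarity idea, using only the Python standard library: flatten two versions of a page into their tag sequences and compare them. The paper's matching algorithms are more elaborate; this only illustrates the kind of signal a wrapper-adaptation system might monitor.

# Minimal page-version similarity sketch (illustrative, stdlib only).
from difflib import SequenceMatcher
from html.parser import HTMLParser

class TagCollector(HTMLParser):
    """Flattens a page into its sequence of opening tags."""
    def __init__(self):
        super().__init__()
        self.tags = []
    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

def page_similarity(html_old: str, html_new: str) -> float:
    old, new = TagCollector(), TagCollector()
    old.feed(html_old)
    new.feed(html_new)
    # Ratio of matching tag subsequences: 1.0 means structurally identical,
    # lower values signal layout changes that may break a wrapper.
    return SequenceMatcher(None, old.tags, new.tags).ratio()

print(page_similarity("<div><ul><li>a</li></ul></div>",
                      "<div><ol><li>a</li></ol></div>"))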
  

Representation Extraction and Deep Neural Recommendation for Collaborative Filtering

Dec 09, 2020
Arash Khoeini, Saman Haratizadeh, Ehsan Hoseinzade

Many deep learning approaches solve complicated classification and regression problems by hierarchically constructing complex features from the raw input data. Although a few works have investigated the application of deep neural networks in the recommendation domain, they mostly extract entity features by exploiting unstructured auxiliary data such as visual and textual information, and when it comes to using the user-item rating matrix, feature extraction is done by matrix factorization. As matrix factorization has some limitations, some works have replaced it with deep neural networks, but these works either need to exploit unstructured data such as item reviews or images, or are specially designed for implicit data and do not take the user-item rating matrix into account. In this paper, we investigate the use of novel representation learning algorithms to extract user and item representations from the rating matrix, and offer a deep neural network for collaborative filtering. Our proposed approach is a modular algorithm consisting of two main phases: REpresentation eXtraction and a deep neural NETwork (RexNet). Using two joint and parallel neural networks in RexNet enables it to extract a hierarchy of features for each entity in order to predict the degree of interest of users in items. The resulting predictions are then used for the final recommendation. Unlike other deep learning recommendation approaches, RexNet does not depend on unstructured auxiliary data such as visual and textual information; instead, it uses only the user-item rating matrix as its input. We evaluated RexNet in an extensive set of experiments against state-of-the-art recommendation methods. The results show that RexNet significantly outperforms the baseline algorithms on a variety of datasets with different degrees of density.
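
A hypothetical sketch of the two joint, parallel networks: one tower embeds a user from their row of the rating matrix, the other embeds an item from its column, and the fused representations predict the degree of interest. Layer sizes and the fusion scheme are assumptions, not the RexNet architecture.

# Hypothetical two-tower sketch over rating-matrix rows and columns
# (illustrative assumptions, not the RexNet architecture).
import torch
import torch.nn as nn

class TwoTowerRater(nn.Module):
    def __init__(self, n_users: int, n_items: int, dim: int = 64):
        super().__init__()
        # One tower embeds a user from their rating-matrix row, the other
        # embeds an item from its column; no auxiliary data is used.
        self.user_tower = nn.Sequential(nn.Linear(n_items, 128), nn.ReLU(),
                                        nn.Linear(128, dim))
        self.item_tower = nn.Sequential(nn.Linear(n_users, 128), nn.ReLU(),
                                        nn.Linear(128, dim))
        self.predict = nn.Linear(2 * dim, 1)

    def forward(self, user_rows, item_cols):
        u = self.user_tower(user_rows)
        i = self.item_tower(item_cols)
        # Fuse both representations to predict the degree of interest.
        return self.predict(torch.cat([u, i], dim=-1)).squeeze(-1)

ratings = torch.rand(100, 50)  # toy 100-user x 50-item rating matrix
model = TwoTowerRater(n_users=100, n_items=50)
pred = model(ratings[:4], ratings.T[:4])  # interest of user k in item k, k=0..3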

  

A Short Survey of Biomedical Relation Extraction Techniques

Jul 25, 2017
Elham Shahab

Biomedical information has been growing rapidly in recent years, and retrieving useful data through information extraction systems is receiving increasing attention. In this survey, we focus on different aspects of relation extraction techniques in the biomedical domain and briefly describe the state of the art for relation extraction between a variety of biological elements.

* updated keywords and reference format 
  

TNNT: The Named Entity Recognition Toolkit

Aug 31, 2021
Sandaru Seneviratne, Sergio J. Rodríguez Méndez, Xuecheng Zhang, Pouya G. Omran, Kerry Taylor, Armin Haller

Extraction of categorised named entities from text is a complex task, given the availability of a variety of Named Entity Recognition (NER) models and the unstructured information encoded in different source document formats. Processing documents to extract text, identifying suitable NER models for a task, and obtaining statistical information are important in data analysis for making informed decisions. This paper presents TNNT, a toolkit that automates the extraction of categorised named entities from unstructured information encoded in source documents, using diverse state-of-the-art Natural Language Processing (NLP) tools and NER models. TNNT integrates 21 different NER models as part of a Knowledge Graph Construction Pipeline (KGCP) that takes a document set as input and processes it according to the defined settings, applying the selected blocks of NER models to produce the results. The toolkit generates all results with an integrated summary of the extracted entities, enabling enhanced data analysis to support the KGCP and to aid further NLP tasks.

* This demo paper will be submitted to K-Cap 2021 
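
TNNT's own interface is not reproduced here, but the underlying pattern, running NER models over a document set and summarising the categorised entities, can be illustrated generically with spaCy:

# Generic illustration of NER-over-documents with an entity summary,
# using spaCy rather than TNNT's own API (which is not shown here).
# Requires: pip install spacy && python -m spacy download en_core_web_sm
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")  # one of many possible NER models

def entity_summary(documents):
    """Counts (category, surface form) pairs across a document set."""
    counts = Counter()
    for text in documents:
        for ent in nlp(text).ents:
            counts[(ent.label_, ent.text)] += 1
    return counts

docs = ["Marie Curie worked in Paris.", "Curie won the Nobel Prize twice."]
for (label, text), n in entity_summary(docs).most_common():
    print(f"{label:10s} {text:20s} x{n}")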
  

Feature extraction using Latent Dirichlet Allocation and Neural Networks: A case study on movie synopses

Apr 05, 2016
Despoina Christou

Feature extraction has gained increasing attention in the field of machine learning, since informative features are crucial for detecting patterns, extracting information, and predicting future observations from big data. The process of extracting features is closely linked to dimensionality reduction, as it implies transforming the data from a sparse, high-dimensional space to higher-level, meaningful abstractions. This dissertation employs neural networks for distributed paragraph representations, and Latent Dirichlet Allocation to capture higher-level features of paragraph vectors. Although neural networks for distributed paragraph representations are considered the state of the art for extracting paragraph vectors, we show that a quick topic-analysis model such as Latent Dirichlet Allocation can provide meaningful features too. We evaluate the two methods on the CMU Movie Summary Corpus, a collection of 25,203 movie plot summaries extracted from Wikipedia. Finally, for both approaches, we use K-Nearest Neighbors to discover similar movies, and plot the projected representations using t-Distributed Stochastic Neighbor Embedding to depict the context similarities. These similarities, expressed as movie distances, can be used for movie recommendation. The recommended movies of this approach are compared with the recommended movies from IMDb, which uses a collaborative filtering approach, to show that our two models could constitute either an alternative or a supplementary recommendation approach.
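
A compact sketch of the described pipeline with scikit-learn: bag-of-words counts, LDA topic proportions as paragraph features, then nearest neighbours over the topic space. The toy synopses and hyperparameters are illustrative only.

# LDA topic proportions as features, then KNN for similar movies
# (illustrative hyperparameters and toy data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.neighbors import NearestNeighbors

synopses = [
    "a detective hunts a serial killer in a rainy city",
    "a police officer chases a murderer through the night",
    "two robots fall in love on a distant planet",
]

counts = CountVectorizer(stop_words="english").fit_transform(synopses)
topics = LatentDirichletAllocation(n_components=2, random_state=0) \
    .fit_transform(counts)  # each synopsis -> topic-proportion vector

knn = NearestNeighbors(n_neighbors=2).fit(topics)
_, idx = knn.kneighbors(topics[:1])  # movies most similar to the first one
print(idx)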

  

Clinical Concept Extraction for Document-Level Coding

Jun 08, 2019
Sarah Wiegreffe, Edward Choi, Sherry Yan, Jimeng Sun, Jacob Eisenstein

The text of clinical notes can be a valuable source of patient information and clinical assessments. Historically, the primary approach for exploiting clinical notes has been information extraction: linking spans of text to concepts in a detailed domain ontology. However, recent work has demonstrated the potential of supervised machine learning to extract document-level codes directly from the raw text of clinical notes. We propose to bridge the gap between the two approaches with two novel syntheses: (1) treating extracted concepts as features, which are used to supplement or replace the text of the note; (2) treating extracted concepts as labels, which are used to learn a better representation of the text. Unfortunately, the resulting concepts do not yield performance gains on the document-level clinical coding task. We explore possible explanations and future research directions.

* ACL BioNLP workshop (2019) 
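
Synthesis (1), concepts as features, amounts to concatenating concept indicators with text features before classification. The sketch below is a hypothetical minimal version: the toy notes and concept vocabulary are invented, and the upstream concept extractor is assumed.

# Hypothetical sketch of extracted concepts as features supplementing
# the note text (toy data; the concept extractor itself is assumed).
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

notes = ["patient reports chest pain and dyspnea",
         "routine follow-up, no acute complaints"]
labels = [1, 0]  # toy document-level codes

# Binary indicators for concepts linked by an (assumed) upstream extractor.
concept_vocab = ["chest_pain", "dyspnea"]
concept_feats = csr_matrix(np.array([[1, 1], [0, 0]]))

text_feats = TfidfVectorizer().fit_transform(notes)
features = hstack([text_feats, concept_feats])  # text supplemented by concepts

clf = LogisticRegression().fit(features, labels)
print(clf.predict(features))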
  