
"Information Extraction": models, code, and papers

A logic-based relational learning approach to relation extraction: The OntoILPER system

Jan 13, 2020
Rinaldo Lima, Bernard Espinasse, Fred Freitas

Relation Extraction (RE), the task of detecting and characterizing semantic relations between entities in text, has gained much importance in the last two decades, mainly in the biomedical domain. Many papers have been published on Relation Extraction using supervised machine learning techniques. Most of these techniques rely on statistical methods, such as feature-based and tree-kernel-based methods. Such statistical learning techniques are usually based on a propositional hypothesis space for representing examples, i.e., they employ an attribute-value representation of features. This kind of representation has some drawbacks, particularly for the extraction of complex relations that demand more contextual information about the instances involved, since it cannot effectively capture structural information from parse trees without loss of information. In this work, we present OntoILPER, a logic-based relational learning approach to Relation Extraction that uses Inductive Logic Programming to generate extraction models in the form of symbolic extraction rules. OntoILPER takes advantage of a rich relational representation of examples, which can alleviate the aforementioned drawbacks. We argue that the proposed relational approach is more suitable for Relation Extraction than statistical ones for several reasons. Moreover, OntoILPER uses a domain ontology that guides the background knowledge generation process and is used for storing the extracted relation instances. The induced extraction rules were evaluated on three protein-protein interaction datasets from the biomedical domain, and the performance of OntoILPER extraction models was compared with that of other state-of-the-art RE systems. The encouraging results suggest that the proposed solution is effective.
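
As a rough illustration of the idea, here is a minimal, hand-written sketch (not taken from the paper) of the kind of symbolic extraction rule an ILP learner might induce, applied to a toy relational representation of a sentence. The predicates (entity_type, dep) and the rule body are illustrative assumptions, not OntoILPER's actual representation.

```python
# Toy relational representation of "Protein e1 interacts with protein e2".
facts = {
    ("entity_type", "e1", "Protein"),
    ("entity_type", "e2", "Protein"),
    ("dep", "e1", "interacts", "nsubj"),
    ("dep", "e2", "interacts", "obj"),
}

def rule_interacts_with(facts):
    """IF both arguments are proteins attached to the same verb as subject
    and object, THEN extract interacts_with(arg1, arg2)."""
    entities = [f for f in facts if f[0] == "entity_type"]
    deps = [f for f in facts if f[0] == "dep"]
    extracted = []
    for (_, a, type_a) in entities:
        for (_, b, type_b) in entities:
            if a == b or type_a != "Protein" or type_b != "Protein":
                continue
            for (_, x, verb_x, role_x) in deps:
                for (_, y, verb_y, role_y) in deps:
                    if (x, y) == (a, b) and verb_x == verb_y \
                            and (role_x, role_y) == ("nsubj", "obj"):
                        extracted.append(("interacts_with", a, b))
    return extracted

print(rule_interacts_with(facts))  # [('interacts_with', 'e1', 'e2')]
```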

* Engineering Applications of Artificial Intelligence, Elsevier, 2019, 78, pp.142-157 
  

Temporal Relation Extraction with a Graph-Based Deep Biaffine Attention Model

Jan 16, 2022
Bo-Ying Su, Shang-Ling Hsu, Kuan-Yin Lai, Amarnath Gupta

Temporal information extraction plays a critical role in natural language understanding. Previous systems have incorporated advanced neural language models and have successfully enhanced the accuracy of temporal information extraction tasks. However, these systems have two major shortcomings. First, they fail to make use of the two-sided nature of temporal relations in prediction. Second, they involve non-parallelizable pipelines in the inference process that bring little performance gain. To this end, we propose a novel temporal information extraction model based on deep biaffine attention that extracts temporal relationships between events in unstructured text efficiently and accurately. Our model is efficient because it performs relation extraction directly instead of treating event annotation as a prerequisite of relation extraction. Moreover, our architecture uses multilayer perceptrons (MLPs) with biaffine attention to predict arcs and relation labels separately, improving relation detection accuracy by exploiting the two-sided nature of temporal relationships. We experimentally demonstrate that our model achieves state-of-the-art performance in temporal relation extraction.
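
A minimal PyTorch sketch of a deep biaffine scorer over pairs of event representations is shown below. The encoder, layer sizes, and label set are assumptions for illustration and do not reflect the paper's exact configuration.

```python
# Deep biaffine scoring of every ordered pair of events for temporal labels.
import torch
import torch.nn as nn

class BiaffineRelationScorer(nn.Module):
    def __init__(self, enc_dim=256, mlp_dim=128, n_labels=4):
        super().__init__()
        # Separate MLPs produce "head" and "dependent" views of each event.
        self.head_mlp = nn.Sequential(nn.Linear(enc_dim, mlp_dim), nn.ReLU())
        self.dep_mlp = nn.Sequential(nn.Linear(enc_dim, mlp_dim), nn.ReLU())
        # One (mlp_dim+1) x (mlp_dim+1) biaffine matrix per relation label.
        self.U = nn.Parameter(torch.zeros(n_labels, mlp_dim + 1, mlp_dim + 1))
        nn.init.xavier_uniform_(self.U)

    def forward(self, event_reprs):             # (batch, n_events, enc_dim)
        h = self.head_mlp(event_reprs)          # (batch, n_events, mlp_dim)
        d = self.dep_mlp(event_reprs)
        ones = torch.ones(*h.shape[:-1], 1)
        h = torch.cat([h, ones], dim=-1)        # append bias dimension
        d = torch.cat([d, ones], dim=-1)
        # scores[b, l, i, j] = h_i^T U_l d_j for every ordered event pair.
        return torch.einsum("bih,lhk,bjk->blij", h, self.U, d)

scores = BiaffineRelationScorer()(torch.randn(2, 5, 256))
print(scores.shape)  # torch.Size([2, 4, 5, 5])
```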

  

New Insights on Target Speaker Extraction

Feb 01, 2022
Mohamed Elminshawi, Wolfgang Mack, Soumitro Chakrabarty, Emanuël A. P. Habets

In recent years, researchers have become increasingly interested in speaker extraction (SE), which is the task of extracting the speech of a target speaker from a mixture of interfering speakers with the help of auxiliary information about the target speaker. Several forms of auxiliary information have been employed in single-channel SE, such as a speech snippet enrolled from the target speaker or visual information corresponding to the spoken utterance. Many SE studies have reported performance improvement compared to speaker separation (SS) methods with oracle selection, arguing that this is due to the use of auxiliary information. However, such works have not considered state-of-the-art SS methods that have shown impressive separation performance. In this paper, we revisit and examine the role of the auxiliary information in SE. Specifically, we compare the performance of two SE systems (audio-based and video-based) with SS using a common framework that utilizes the state-of-the-art dual-path recurrent neural network as the main learning machine. In addition, we study how much the considered SE systems rely on the auxiliary information by analyzing the systems' output for random auxiliary signals. Experimental evaluation on various datasets suggests that the main purpose of the auxiliary information in the considered SE systems is only to specify the target speaker in the mixture and that it does not provide consistent extraction performance gain when compared to the uninformed SS system.
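
As a rough illustration of the informed (SE) versus blind (SS) comparison, the sketch below conditions a generic mask estimator on an optional speaker embedding. The concatenation scheme, sizes, and the plain BiLSTM standing in for the dual-path RNN are assumptions, not the framework actually used in the paper.

```python
# Shared mask estimator: with a speaker cue it extracts one target (SE),
# without it it separates all speakers (SS).
import torch
import torch.nn as nn

class MaskEstimator(nn.Module):
    def __init__(self, feat_dim=64, spk_dim=0, n_sources=1):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim + spk_dim, 128, batch_first=True,
                           bidirectional=True)
        self.out = nn.Linear(256, feat_dim * n_sources)
        self.n_sources, self.feat_dim = n_sources, feat_dim

    def forward(self, mix_feats, spk_emb=None):   # (B, T, F), (B, spk_dim)
        if spk_emb is not None:                   # SE: inject target-speaker cue
            cue = spk_emb.unsqueeze(1).expand(-1, mix_feats.size(1), -1)
            mix_feats = torch.cat([mix_feats, cue], dim=-1)
        h, _ = self.rnn(mix_feats)
        masks = torch.sigmoid(self.out(h))
        return masks.view(*mix_feats.shape[:2], self.n_sources, self.feat_dim)

se = MaskEstimator(spk_dim=32, n_sources=1)       # informed: one target mask
ss = MaskEstimator(spk_dim=0, n_sources=2)        # blind: one mask per speaker
feats = torch.randn(4, 100, 64)
print(se(feats, torch.randn(4, 32)).shape, ss(feats).shape)
```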

  

Neural News Recommendation with Event Extraction

Nov 09, 2021
Songqiao Han, Hailiang Huang, Jiangwei Liu

A key challenge of online news recommendation is to help users find articles they are interested in. Traditional news recommendation methods usually use a single channel of news information, which is insufficient to encode news and user representations. Recent research uses multiple channels of news information, e.g., title, category, and body, to enhance news and user representations. However, these methods only use various attention mechanisms to fuse multi-view embeddings without digging into the higher-level information contained in the context. They also encode news content at the word level and jointly train the attention parameters in the recommendation network, so more corpora are required to train the model. We propose an Event Extraction-based News Recommendation (EENR) framework to overcome these shortcomings, utilizing event extraction to abstract higher-level information. EENR also uses a two-stage strategy to reduce the number of parameters in subsequent parts of the recommendation network. In the first stage, we train the Event Extraction module on external corpora; in the second stage, we apply the trained model to the news recommendation dataset to predict event-level information, including event types, roles, and arguments. We then fuse multiple channels of information, including event information, news title, and category, to encode news and users. Extensive experiments on a real-world dataset show that our EENR method can effectively improve the performance of news recommendation. Finally, we also explore the reasonableness of utilizing higher-level abstract information as a substitute for the news body content.
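
Below is a minimal sketch of the fusion step only: combining event, title, and category channel embeddings into one news vector with additive attention. The dimensions and the fusion form are illustrative assumptions, not the EENR architecture itself.

```python
# Attention-based fusion of multiple channel embeddings into a news vector.
import torch
import torch.nn as nn

class ChannelFusion(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(),
                                   nn.Linear(dim, 1))

    def forward(self, channels):                          # (B, n_channels, dim)
        weights = torch.softmax(self.score(channels), dim=1)  # (B, n_channels, 1)
        return (weights * channels).sum(dim=1)             # (B, dim)

event_vec, title_vec, cat_vec = (torch.randn(8, 128) for _ in range(3))
news_vec = ChannelFusion()(torch.stack([event_vec, title_vec, cat_vec], dim=1))
print(news_vec.shape)  # torch.Size([8, 128])
```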

* 11 pages, 4 figures, 2 tables 
  

Outfit Generation and Style Extraction via Bidirectional LSTM and Autoencoder

Oct 23, 2018
Takuma Nakamura, Ryosuke Goto

When creating an outfit, style is a criterion in selecting each fashion item. This means that style can be regarded as a feature of the overall outfit. However, in various previous studies on outfit generation, there have been few methods focusing on global information obtained from an outfit. To address this deficiency, we have incorporated an unsupervised style extraction module into a model to learn outfits. Using the style information of an outfit as a whole, the proposed model succeeded in generating outfits more flexibly without requiring additional information. Moreover, the style information extracted by the proposed model is easy to interpret. The proposed model was evaluated on two human-generated outfit datasets. In a fashion item prediction task (missing prediction task), the proposed model outperformed a baseline method. In a style extraction task, the proposed model extracted some easily distinguishable styles. In an outfit generation task, the proposed model generated an outfit while controlling its styles. This capability allows us to generate fashionable outfits according to various preferences.
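
A minimal sketch of the general idea follows: a bidirectional LSTM contextualizes the sequence of item embeddings, while an autoencoder-style bottleneck over an outfit-level summary yields a compact style vector. Layer sizes and how the style code is reused are assumptions, not the paper's model.

```python
# BiLSTM over fashion-item embeddings plus an autoencoder-style style bottleneck.
import torch
import torch.nn as nn

class OutfitModel(nn.Module):
    def __init__(self, item_dim=128, style_dim=16):
        super().__init__()
        self.bilstm = nn.LSTM(item_dim, item_dim, batch_first=True,
                              bidirectional=True)
        # Unsupervised style extraction: compress a global outfit summary.
        self.style_enc = nn.Linear(item_dim, style_dim)
        self.style_dec = nn.Linear(style_dim, item_dim)

    def forward(self, items):                 # (B, n_items, item_dim)
        seq, _ = self.bilstm(items)           # contextual item states
        outfit_summary = items.mean(dim=1)    # global outfit representation
        style = self.style_enc(outfit_summary)
        recon = self.style_dec(style)         # reconstruction target for AE loss
        return seq, style, recon

seq, style, recon = OutfitModel()(torch.randn(2, 5, 128))
print(seq.shape, style.shape, recon.shape)
```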

* 9 pages, 5 figures, KDD Workshop AI for fashion 
  

Attention Guided Graph Convolutional Networks for Relation Extraction

Aug 09, 2019
Zhijiang Guo, Yan Zhang, Wei Lu

Dependency trees convey rich structural information that is proven useful for extracting relations among entities in text. However, how to effectively make use of relevant information while ignoring irrelevant information from the dependency trees remains a challenging research question. Existing approaches employing rule-based hard-pruning strategies for selecting relevant partial dependency structures may not always yield optimal results. In this work, we propose Attention Guided Graph Convolutional Networks (AGGCNs), a novel model which directly takes full dependency trees as inputs. Our model can be understood as a soft-pruning approach that automatically learns how to selectively attend to the relevant sub-structures useful for the relation extraction task. Extensive results on various tasks including cross-sentence n-ary relation extraction and large-scale sentence-level relation extraction show that our model is able to better leverage the structural information of the full dependency trees, giving significantly better results than previous approaches.
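
The sketch below illustrates the soft-pruning idea in PyTorch: self-attention over token representations acts as a soft, fully weighted adjacency matrix for graph convolution, instead of a hard-pruned dependency tree. The single-head, single-layer form and the sizes are simplifications; the full AGGCN model is more elaborate.

```python
# One attention-guided graph convolution layer over token representations.
import torch
import torch.nn as nn

class AttentionGuidedGCNLayer(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.gcn = nn.Linear(dim, dim)

    def forward(self, x):                                  # (B, n_tokens, dim)
        # Attention scores serve as a soft adjacency matrix over all token pairs.
        att = torch.softmax(self.q(x) @ self.k(x).transpose(1, 2)
                            / x.size(-1) ** 0.5, dim=-1)   # (B, n, n)
        return torch.relu(self.gcn(att @ x))               # propagate and transform

out = AttentionGuidedGCNLayer()(torch.randn(2, 10, 128))
print(out.shape)  # torch.Size([2, 10, 128])
```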

* Accepted to ACL 2019, 11 pages, 4 figures, 5 tables 
  

Possibilistic Pertinence Feedback and Semantic Networks for Goal's Extraction

Jun 05, 2012
Mohamed Nazih Omri

Pertinence Feedback is a technique that enables a user to interactively express his information requirement by modifying his original query formulation with further information. This information is provided by explicitly confirming the pertinence of some of the objects and/or goals extracted by the system. Obviously, the user cannot mark objects and/or goals as pertinent until some are extracted, so the first search has to be initiated by a query, and the initial query specification has to be good enough to pick out some pertinent objects and/or goals from the Semantic Network. In this paper we present a short survey of fuzzy and semantic approaches to Knowledge Extraction. The goal of such approaches is to define flexible Knowledge Extraction Systems able to deal with the inherent vagueness and uncertainty of the extraction process. It has long been recognised that interactivity improves the effectiveness of Knowledge Extraction systems. Novice users' queries are the most natural and interactive medium of communication, and recent progress in recognition is making it possible to build systems that interact with the user. However, given the typical novice users' queries submitted to Knowledge Extraction Systems, it is easy to imagine that the effects of goal recognition errors in such queries must be severely detrimental to the system's effectiveness. The experimental work reported in this paper shows that the use of possibility theory in classical Knowledge Extraction techniques for processing novice users' queries is more robust than the use of probability theory. Moreover, both possibilistic and probabilistic pertinence feedback can be effectively employed to improve the effectiveness of novice users' query processing.
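
For readers unfamiliar with possibility theory, here is a minimal Python sketch of the possibility and necessity measures it substitutes for probabilities. The goal set and the possibility distribution are made-up illustrations, not the paper's data.

```python
# Possibility distribution over candidate user goals (values in [0, 1]).
pi = {"book_flight": 0.9, "cancel_flight": 0.6, "check_status": 0.3}

def possibility(event, pi):
    """Pi(A) = max over goals in A of pi(goal): how plausible A is at best."""
    return max(pi[g] for g in event)

def necessity(event, pi):
    """N(A) = 1 - Pi(complement of A): how certain A is."""
    complement = set(pi) - set(event)
    return 1.0 - (max(pi[g] for g in complement) if complement else 0.0)

A = {"book_flight", "cancel_flight"}
print(possibility(A, pi), necessity(A, pi))   # 0.9  0.7
```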

* Asian Journal of Information Technology (4):258-265 - 2004 
  

Probabilistic Coreference in Information Extraction

Jun 10, 1997
Andrew Kehler

Certain applications require that the output of an information extraction system be probabilistic, so that a downstream system can reliably fuse the output with possibly contradictory information from other sources. In this paper we consider the problem of assigning a probability distribution to alternative sets of coreference relationships among entity descriptions. We present the results of initial experiments with several approaches to estimating such distributions in an application using SRI's FASTUS information extraction system.
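
A minimal sketch of the output format the paper argues for: scores over alternative coreference hypotheses normalized into a probability distribution that a downstream system can fuse. The hypotheses and scores below are made up; the paper estimates such distributions within the FASTUS pipeline, not with this toy scorer.

```python
# Normalize scores over alternative coreference hypotheses into probabilities.
import math

# Each hypothesis groups the mentions "Acme Corp." / "the company" / "it".
hypotheses = {
    "{Acme, company, it}":       2.1,   # all three corefer
    "{Acme, company} {it}":      1.4,
    "{Acme} {company, it}":      0.3,
    "{Acme} {company} {it}":    -0.5,   # no coreference
}

z = sum(math.exp(s) for s in hypotheses.values())
distribution = {h: math.exp(s) / z for h, s in hypotheses.items()}
for h, p in distribution.items():
    print(f"{p:.2f}  {h}")
```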

* Proceedings of the Second Conference on Empirical Methods in NLP (EMNLP-2), August 1-2, 1997, Providence, RI 
* LaTeX, 11 pages, requires aclap.sty 
  

DBpedia NIF: Open, Large-Scale and Multilingual Knowledge Extraction Corpus

Dec 26, 2018
Milan Dojchinovski, Julio Hernandez, Markus Ackermann, Amit Kirschenbaum, Sebastian Hellmann

In the past decade, the DBpedia community has put a significant amount of effort into developing technical infrastructure and methods for the efficient extraction of structured information from Wikipedia. These efforts have primarily focused on harvesting, refining and publishing semi-structured information found in Wikipedia articles, such as information from infoboxes, categorization information, images, wikilinks and citations. Nevertheless, a vast amount of valuable information is still contained in the unstructured Wikipedia article texts. In this paper, we present DBpedia NIF - a large-scale and multilingual knowledge extraction corpus. The aim of the dataset is two-fold: to dramatically broaden and deepen the amount of structured information in DBpedia, and to provide a large-scale and multilingual language resource for the development of various NLP and IR tasks. The dataset provides the content of all articles for 128 Wikipedia languages. We describe the dataset creation process and the NLP Interchange Format (NIF) used to model the content, links and structure of the information in the Wikipedia articles. The dataset has been further enriched with about 25% more links, and selected partitions have been published as Linked Data. Finally, we describe the maintenance and sustainability plans, and selected use cases of the dataset from the TextExt knowledge extraction challenge.
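
As an illustration of NIF-style modelling, the rdflib sketch below represents an article string as a context resource and one wikilink as a substring annotation with character offsets. The URIs are made up and the exact properties and classes used by the DBpedia NIF corpus may differ from this minimal example.

```python
# Minimal NIF-style modelling of an article string and one link annotation.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

NIF = Namespace("http://persistence.uni-leipzig.org/nlp2rdf/ontologies/nif-core#")
g = Graph()

text = "Berlin is the capital of Germany."
ctx = URIRef("http://example.org/Berlin?nif=context")
g.add((ctx, RDF.type, NIF.Context))
g.add((ctx, NIF.isString, Literal(text)))

# Annotate the substring "Germany" (characters 25-32) as a linked phrase.
link = URIRef("http://example.org/Berlin?nif=phrase_25_32")
g.add((link, RDF.type, NIF.Phrase))
g.add((link, NIF.referenceContext, ctx))
g.add((link, NIF.anchorOf, Literal("Germany")))
g.add((link, NIF.beginIndex, Literal(25, datatype=XSD.nonNegativeInteger)))
g.add((link, NIF.endIndex, Literal(32, datatype=XSD.nonNegativeInteger)))

print(g.serialize(format="turtle"))
```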

* 15 pages, 1 figure, 4 tables, 1 listing 
  
>>