"Information Extraction": models, code, and papers

Probability Map Guided Bi-directional Recurrent UNet for Pancreas Segmentation

Apr 07, 2019
Jun Li, Xiaozhu Lin, Hui Che, Hao Li, Xiaohua Qian

Pancreatic cancer is one of the most lethal cancers, as its morbidity approximates its mortality. A method for accurately segmenting the pancreas can assist doctors in the diagnosis and treatment of pancreatic cancer, but large variations in the organ's shape and volume make segmentation difficult. Among the widely used approaches, 2D methods ignore spatial information across slices, while 3D models are limited by high resource consumption and GPU memory occupancy. To address these issues, we propose a bi-directional recurrent UNet based on probabilistic map guidance (PBR-UNet). PBR-UNet comprises a feature extraction module for extracting pixel-level probabilistic maps and a bi-directional recurrent module for fine segmentation. The extracted probabilistic maps guide the fine segmentation, and the bi-directional recurrent module integrates contextual information into the entire network to avoid the loss of spatial information during propagation. By combining the probabilistic maps of adjacent slices with the bi-directional recurrent segmentation of the intermediary slice, this paper addresses both the loss of three-dimensional information in 2D networks and the large computational cost of 3D models. We evaluate our approach on the NIH pancreas dataset using the Dice similarity coefficient (DSC) and achieve a competitive result of 83.35%.

* Under review 
  
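To make the guidance idea described above concrete, here is a minimal sketch of probability-map-guided, bi-directional slice refinement, assuming PyTorch; `SliceSegmenter` is a toy stand-in for the paper's UNet backbone, and the fusion-by-averaging step is an illustrative assumption rather than the authors' implementation:

```python
import torch
import torch.nn as nn

class SliceSegmenter(nn.Module):
    """Toy stand-in for the 2D backbone: input channels = [slice, guidance map]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, slice_and_guide):
        return torch.sigmoid(self.net(slice_and_guide))  # per-pixel probability map

def bidirectional_refine(volume: torch.Tensor, model: nn.Module) -> torch.Tensor:
    """volume: (S, 1, H, W) stack of slices -> (S, 1, H, W) probability maps."""
    S = volume.shape[0]
    zero = torch.zeros_like(volume[:1])
    # Coarse pass: segment every slice without guidance.
    probs = [model(torch.cat([volume[i:i + 1], zero], dim=1)) for i in range(S)]
    # Forward recurrence: slice i is guided by the probability map of slice i - 1.
    fwd = list(probs)
    for i in range(1, S):
        fwd[i] = model(torch.cat([volume[i:i + 1], fwd[i - 1]], dim=1))
    # Backward recurrence: slice i is guided by the probability map of slice i + 1.
    bwd = list(probs)
    for i in range(S - 2, -1, -1):
        bwd[i] = model(torch.cat([volume[i:i + 1], bwd[i + 1]], dim=1))
    # Fuse the two directions.
    return torch.cat([(f + b) / 2 for f, b in zip(fwd, bwd)], dim=0)

# Example: bidirectional_refine(torch.randn(16, 1, 64, 64), SliceSegmenter())
```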

CorefDRE: Document-level Relation Extraction with coreference resolution

Feb 22, 2022
Zhongxuan Xue, Rongzhen Li, Qizhu Dai, Zhong Jiang

Document-level relation extraction aims to extract relational facts from a document consisting of multiple sentences, in which pronouns that refer across sentences are a ubiquitous phenomenon not found in single-sentence extraction. However, most previous works focus on coreference resolution for mentions other than pronouns, and rarely pay attention to mention-pronoun coreference when capturing relations. To represent multi-sentence features carried by pronouns, we imitate the human reading process by leveraging coreference information while dynamically constructing a heterogeneous graph to enhance semantic information. Since pronouns are notoriously ambiguous in the graph, a mention-pronoun coreference resolution module is introduced to calculate the affinity between pronouns and their corresponding mentions, and a noise suppression mechanism is proposed to reduce the noise introduced by pronouns. Experiments on the public datasets DocRED, DialogRE, and MPDD show that Coref-aware Doc-level Relation Extraction based on Graph Inference Network outperforms the state of the art.

  
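A rough sketch of the mention-pronoun affinity and noise-suppression ideas, assuming PyTorch; the dot-product affinity and the thresholding scheme are illustrative assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def mention_pronoun_affinity(pronoun_vecs, mention_vecs, temperature=1.0):
    """Scaled dot-product affinity between each pronoun node and each candidate
    mention node, normalized over mentions.

    pronoun_vecs: (P, d), mention_vecs: (M, d) graph-node embeddings.
    Returns (P, M) weights that could scale pronoun-mention edges in a
    heterogeneous document graph.
    """
    d = pronoun_vecs.shape[-1]
    scores = pronoun_vecs @ mention_vecs.T / (d ** 0.5 * temperature)
    return F.softmax(scores, dim=-1)

def suppress_noisy_edges(affinity, threshold=0.1):
    """Toy noise suppression: drop low-affinity pronoun-mention edges and
    renormalize, so ambiguous pronouns contribute less to graph inference."""
    kept = torch.where(affinity >= threshold, affinity, torch.zeros_like(affinity))
    return kept / kept.sum(dim=-1, keepdim=True).clamp_min(1e-8)
```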

Unsupervised Word and Dependency Path Embeddings for Aspect Term Extraction

May 25, 2016
Yichun Yin, Furu Wei, Li Dong, Kaimeng Xu, Ming Zhang, Ming Zhou

In this paper, we develop a novel approach to aspect term extraction based on unsupervised learning of distributed representations of words and dependency paths. The basic idea is to connect two words (w1 and w2) with the dependency path (r) between them in the embedding space. Specifically, our method optimizes the objective w1 + r = w2 in the low-dimensional space, where multi-hop dependency paths are treated as sequences of grammatical relations and modeled by a recurrent neural network. We then design embedding features that consider both linear context and dependency context information for conditional random field (CRF) based aspect term extraction. Experimental results on the SemEval datasets show that (1) with only embedding features, we can achieve state-of-the-art results, and (2) our embedding method, which incorporates syntactic information among words, yields better performance than other representative methods for aspect term extraction.

* IJCAI 2016 
  
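A minimal sketch of the w1 + r = w2 objective with a recurrent network over the relation sequence, assuming PyTorch; the GRU choice and module layout are illustrative, and negative sampling is omitted:

```python
import torch
import torch.nn as nn

class PathEmbedding(nn.Module):
    """Sketch of the w1 + r = w2 objective: a multi-hop dependency path is a
    sequence of grammatical relations encoded by a GRU into a single vector r."""
    def __init__(self, n_words, n_relations, dim=100):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, dim)
        self.rel_emb = nn.Embedding(n_relations, dim)
        self.path_rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, w1_ids, path_ids, w2_ids):
        """w1_ids, w2_ids: (B,) word ids; path_ids: (B, L) relation ids."""
        w1 = self.word_emb(w1_ids)                    # (B, d)
        w2 = self.word_emb(w2_ids)                    # (B, d)
        _, h = self.path_rnn(self.rel_emb(path_ids))  # h: (1, B, d)
        r = h.squeeze(0)
        # Squared distance between w1 + r and w2; a full training loop would
        # add negative sampling on top of this term.
        return ((w1 + r - w2) ** 2).sum(dim=-1).mean()
```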

CESI: Canonicalizing Open Knowledge Bases using Embeddings and Side Information

Feb 01, 2019
Shikhar Vashishth, Prince Jain, Partha Talukdar

Open Information Extraction (OpenIE) methods extract (noun phrase, relation phrase, noun phrase) triples from text, resulting in the construction of large Open Knowledge Bases (Open KBs). The noun phrases (NPs) and relation phrases in such Open KBs are not canonicalized, leading to the storage of redundant and ambiguous facts. Recent research has posed canonicalization of Open KBs as clustering over manually-defined feature spaces. Manual feature engineering is expensive and often sub-optimal. In order to overcome this challenge, we propose Canonicalization using Embeddings and Side Information (CESI) - a novel approach which performs canonicalization over learned embeddings of Open KBs. CESI extends recent advances in KB embedding by incorporating relevant NP and relation phrase side information in a principled manner. Through extensive experiments on multiple real-world datasets, we demonstrate CESI's effectiveness.

* International World Wide Web Conferences Steering Committee 2018 
* Accepted at WWW 2018 
  
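As a rough illustration of "canonicalization as clustering over learned embeddings", the sketch below clusters noun phrases with hierarchical agglomerative clustering over cosine distances (SciPy); the threshold and linkage are placeholders, not CESI's actual procedure, which additionally folds in side information:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def canonicalize_phrases(phrases, embeddings, distance_threshold=0.3):
    """Toy canonicalization step: cluster noun phrases by cosine distance
    between their learned embeddings, so phrases in the same cluster are
    treated as one canonical entity.

    phrases: list of N strings; embeddings: (N, d) array of learned vectors.
    Returns {cluster_id: [phrases]}.
    """
    dists = pdist(embeddings, metric="cosine")          # condensed distances
    labels = fcluster(linkage(dists, method="average"),
                      t=distance_threshold, criterion="distance")
    clusters = {}
    for phrase, label in zip(phrases, labels):
        clusters.setdefault(int(label), []).append(phrase)
    return clusters

# Usage: canonicalize_phrases(["NYC", "New York City"], np.random.rand(2, 50))
```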

BERE: An accurate distantly supervised biomedical entity relation extraction network

Jun 22, 2019
Lixiang Hong, JinJian Lin, Jiang Tao, Jianyang Zeng

Automated entity relation extraction (RE) from the literature provides an important source for constructing biomedical databases, and is more efficient and extensible than manual curation. However, existing RE models usually ignore the information contained in sentence structures and target entities. In this paper, we propose BERE, a deep learning based model which uses a Gumbel Tree-GRU to learn sentence structures and joint embedding to incorporate entity information. It also employs word-level attention for improved relation extraction and sentence-level attention to suit the distantly supervised dataset. Because the existing datasets are relatively small, we further construct a much larger drug-target interaction extraction (DTIE) dataset by distant supervision. Experiments conducted on both the DDIExtraction 2013 task and the DTIE dataset show our model's effectiveness over state-of-the-art baselines in terms of F1 measures and PR curves.

* My tutor told me to withdraw this paper at once 
  
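A minimal sketch of sentence-level (bag) attention for distant supervision, assuming PyTorch; the relation-query formulation is a common pattern and only approximates what BERE does, with the sentence encoder left out:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BagAttention(nn.Module):
    """Sentence-level attention over a bag of sentences that mention the same
    entity pair: sentences agreeing with the relation query get higher weight,
    so noisy distantly-labeled sentences contribute less."""
    def __init__(self, sent_dim, n_relations):
        super().__init__()
        self.rel_query = nn.Embedding(n_relations, sent_dim)
        self.classifier = nn.Linear(sent_dim, n_relations)

    def forward(self, sent_reps, rel_id):
        """sent_reps: (n_sents, d) encodings of one bag; rel_id: scalar tensor."""
        q = self.rel_query(rel_id)                      # (d,)
        alpha = F.softmax(sent_reps @ q, dim=0)         # (n_sents,) weights
        bag = (alpha.unsqueeze(-1) * sent_reps).sum(0)  # weighted bag vector
        return self.classifier(bag)                     # relation logits

# Usage: BagAttention(64, 5)(torch.randn(3, 64), torch.tensor(2))
```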

Per-run Algorithm Selection with Warm-starting using Trajectory-based Features

Apr 20, 2022
Ana Kostovska, Anja Jankovic, Diederick Vermetten, Jacob de Nobel, Hao Wang, Tome Eftimov, Carola Doerr

Per-instance algorithm selection seeks to recommend, for a given problem instance and a given performance criterion, one or several suitable algorithms that are expected to perform well for the particular setting. The selection is classically done offline, using openly available information about the problem instance or features that are extracted from the instance during a dedicated feature extraction step. This ignores valuable information that the algorithms accumulate during the optimization process. In this work, we propose an alternative, online algorithm selection scheme which we coin per-run algorithm selection. In our approach, we start the optimization with a default algorithm, and, after a certain number of iterations, extract instance features from the observed trajectory of this initial optimizer to determine whether to switch to another optimizer. We test this approach using the CMA-ES as the default solver, and a portfolio of six different optimizers as potential algorithms to switch to. In contrast to other recent work on online per-run algorithm selection, we warm-start the second optimizer using information accumulated during the first optimization phase. We show that our approach outperforms static per-instance algorithm selection. We also compare two different feature extraction principles, based on exploratory landscape analysis and time series analysis of the internal state variables of the CMA-ES, respectively. We show that a combination of both feature sets provides the most accurate recommendations for our test cases, taken from the BBOB function suite from the COCO platform and the YABBOB suite from the Nevergrad platform.

  
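An illustrative, heavily simplified sketch of the per-run selection-and-warm-start loop; the random-search default phase, the toy trajectory features, and the `selector`/portfolio interfaces are all assumptions standing in for CMA-ES, ELA/time-series features, and the trained selector used in the paper:

```python
import numpy as np

def trajectory_features(X, y):
    """Cheap stand-ins for trajectory-based features (the paper uses exploratory
    landscape analysis and CMA-ES internal-state time series)."""
    return np.array([y.min(), y.mean(), y.std(), np.median(y),
                     float(np.linalg.norm(X.std(axis=0)))])

def random_search(f, dim, budget, rng):
    """Toy 'default solver' phase that collects an optimization trajectory."""
    X = rng.uniform(-5, 5, size=(budget, dim))
    y = np.apply_along_axis(f, 1, X)
    return X, y

def local_search(f, x0, budget, rng, step=0.1):
    """Toy portfolio member that accepts a warm start x0."""
    best_x, best_y = x0.copy(), f(x0)
    for _ in range(budget):
        cand = best_x + rng.normal(scale=step, size=best_x.shape)
        y = f(cand)
        if y < best_y:
            best_x, best_y = cand, y
    return best_x, best_y

def per_run_selection(f, dim, selector, portfolio,
                      switch_budget=200, total_budget=1000, seed=0):
    """Run the default solver, extract features from its trajectory, let a
    trained classifier pick the optimizer to switch to, and warm-start it."""
    rng = np.random.default_rng(seed)
    X, y = random_search(f, dim, switch_budget, rng)   # default phase
    feats = trajectory_features(X, y).reshape(1, -1)
    choice = int(selector.predict(feats)[0])           # per-run decision
    x0 = X[np.argmin(y)]                               # warm-starting information
    return portfolio[choice](f, x0, total_budget - switch_budget, rng)
```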

Graph Convolution over Pruned Dependency Trees Improves Relation Extraction

Sep 26, 2018
Yuhao Zhang, Peng Qi, Christopher D. Manning

Dependency trees help relation extraction models capture long-range relations between words. However, existing dependency-based models either neglect crucial information (e.g., negation) by pruning the dependency trees too aggressively, or are computationally inefficient because it is difficult to parallelize over different tree structures. We propose an extension of graph convolutional networks that is tailored for relation extraction, which pools information over arbitrary dependency structures efficiently in parallel. To incorporate relevant information while maximally removing irrelevant content, we further apply a novel pruning strategy to the input trees by keeping words immediately around the shortest path between the two entities among which a relation might hold. The resulting model achieves state-of-the-art performance on the large-scale TACRED dataset, outperforming existing sequence and dependency-based neural models. We also show through detailed analysis that this model has complementary strengths to sequence models, and combining them further improves the state of the art.

* EMNLP 2018. Code available at: https://github.com/qipeng/gcn-over-pruned-trees 
  
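A minimal sketch of a graph convolution over a dependency adjacency matrix plus path-centric pruning, assuming PyTorch and NetworkX; the layer and the normalization are simplified relative to the paper:

```python
import torch
import torch.nn as nn
import networkx as nx

class GraphConvLayer(nn.Module):
    """One graph convolution over a dependency adjacency matrix."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, h, adj):
        """h: (n_tokens, dim) token states; adj: (n, n) 0/1 dependency edges."""
        adj = adj + torch.eye(adj.shape[0])    # add self-loops
        deg = adj.sum(dim=-1, keepdim=True)    # degree normalization
        return torch.relu(self.linear(adj @ h) / deg)

def prune_tree(adj, entity1, entity2, k=1):
    """Path-centric pruning: keep tokens within k hops of the shortest
    dependency path between the two entity tokens."""
    g = nx.from_numpy_array(adj.numpy())
    path = set(nx.shortest_path(g, entity1, entity2))
    keep = set()
    for node in path:
        keep |= set(nx.single_source_shortest_path_length(g, node, cutoff=k))
    keep = sorted(keep)
    return adj[keep][:, keep], keep
```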

Plot2Spectra: an Automatic Spectra Extraction Tool

Jul 06, 2021
Weixin Jiang, Eric Schwenker, Trevor Spreadbury, Kai Li, Maria K. Y. Chan, Oliver Cossairt

Different types of spectroscopy, such as X-ray absorption near edge structure (XANES) and Raman spectroscopy, play a very important role in analyzing the characteristics of different materials. In the scientific literature, XANES/Raman data are usually plotted as line graphs, which is a visually appropriate way to present the information when the end user is a human reader. However, such graphs are not conducive to direct programmatic analysis due to the lack of automatic tools. In this paper, we develop a plot digitizer, named Plot2Spectra, to extract data points from spectroscopy graph images automatically, which makes large-scale data acquisition and analysis possible. Specifically, the plot digitizer is a two-stage framework. In the first, axis-alignment stage, we adopt an anchor-free detector to detect the plot region and then refine the detected bounding boxes with an edge-based constraint to locate the two axes. We also apply a scene text detector to extract and interpret all tick information below the x-axis. In the second, plot-data-extraction stage, we first employ semantic segmentation to separate pixels belonging to plot lines from the background, and then incorporate optical flow constraints on the plot-line pixels to assign them to the appropriate line (data instance) they encode. Extensive experiments validate the effectiveness of the proposed plot digitizer and show that such a tool could help accelerate the discovery and machine learning of materials properties.

  
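To illustrate the second stage, here is a toy routine that converts a segmented line mask into data points using axis calibration read from the detected ticks; it assumes NumPy and ignores overlapping lines and the optical-flow grouping step:

```python
import numpy as np

def lin_map(p, p0, p1, v0, v1):
    """Linear map from pixel coordinate p to data coordinate, given two
    calibration points (p0 -> v0, p1 -> v1) taken from the axis ticks."""
    return v0 + (p - p0) * (v1 - v0) / (p1 - p0)

def mask_to_spectrum(line_mask, x_calib, y_calib):
    """line_mask: (H, W) boolean mask of one plot line from segmentation.
    x_calib / y_calib: ((pixel0, pixel1), (value0, value1)) per axis.
    Returns x and y data arrays, one point per image column with line pixels."""
    (xp, xv), (yp, yv) = x_calib, y_calib
    xs, ys = [], []
    for col in range(line_mask.shape[1]):
        rows = np.nonzero(line_mask[:, col])[0]
        if rows.size == 0:
            continue                # no line pixel in this column
        row = rows.mean()           # the line may be several pixels thick
        xs.append(lin_map(col, xp[0], xp[1], xv[0], xv[1]))
        ys.append(lin_map(row, yp[0], yp[1], yv[0], yv[1]))
    return np.array(xs), np.array(ys)
```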

DOM-LM: Learning Generalizable Representations for HTML Documents

Jan 25, 2022
Xiang Deng, Prashant Shiralkar, Colin Lockard, Binxuan Huang, Huan Sun

HTML documents are an important medium for disseminating information on the Web for human consumption. An HTML document presents information in multiple text formats, including unstructured text, structured key-value pairs, and tables. Effective representation of these documents is essential for machine understanding to enable a wide range of applications, such as Question Answering, Web Search, and Personalization. Existing work has either represented these documents using visual features extracted by rendering them in a browser, which is typically computationally expensive, or has simply treated them as plain text documents, thereby failing to capture useful information presented in their HTML structure. We argue that the text and HTML structure together convey important semantics of the content and therefore warrant a special treatment for their representation learning. In this paper, we introduce a novel representation learning approach for web pages, dubbed DOM-LM, which addresses the limitations of existing approaches by encoding both text and DOM tree structure with a transformer-based encoder and learning generalizable representations for HTML documents via self-supervised pre-training. We evaluate DOM-LM on a variety of webpage understanding tasks, including Attribute Extraction, Open Information Extraction, and Question Answering. Our extensive experiments show that DOM-LM consistently outperforms all baselines designed for these tasks. In particular, DOM-LM demonstrates better generalization performance in both few-shot and zero-shot settings, making it suitable for real-world application settings with limited labeled data.

  
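As a rough illustration of feeding both text and DOM structure to an encoder, the sketch below linearizes a DOM-like tree into tokens paired with structural features (tag, depth, node and parent ids) that a transformer could embed alongside the token embeddings; the `DomNode` class and feature set are assumptions, not DOM-LM's actual input format:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DomNode:
    tag: str
    text: str = ""
    children: List["DomNode"] = field(default_factory=list)

def linearize(node, depth=0, parent_idx=-1, out=None, counter=None):
    """Flatten a DOM tree into (token, tag, depth, node_idx, parent_idx) tuples,
    a simplified stand-in for encoding text together with DOM structure."""
    if out is None:
        out, counter = [], [0]
    idx = counter[0]; counter[0] += 1
    for token in node.text.split():
        out.append((token, node.tag, depth, idx, parent_idx))
    for child in node.children:
        linearize(child, depth + 1, idx, out, counter)
    return out

# Usage:
# page = DomNode("div", children=[DomNode("h1", "DOM-LM"),
#                                 DomNode("p", "encodes text and structure")])
# print(linearize(page))
```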