"Information Extraction": models, code, and papers

An Information Extraction Study: Take In Mind the Tokenization!

Mar 27, 2023
Christos Theodoropoulos, Marie-Francine Moens

Current research on the advantages and trade-offs of using characters instead of tokenized text as input for deep learning models has evolved substantially. New token-free models remove the traditional tokenization step; however, their efficiency remains unclear. Moreover, the effect of tokenization is relatively unexplored in sequence tagging tasks. To this end, we investigate the impact of tokenization when extracting information from documents and present a comparative study and analysis of subword-based and character-based models. Specifically, we study Information Extraction (IE) from biomedical texts. The main outcome is twofold: tokenization patterns can introduce inductive bias that results in state-of-the-art performance, and character-based models produce promising results; thus, transitioning to token-free IE models is feasible.

* presented at EUSFLAT 2023 
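
As a quick, self-contained illustration of the subword-versus-character comparison studied above, the sketch below tokenizes one biomedical sentence both ways. The checkpoint name and example sentence are placeholders and are not taken from the paper.

# Minimal sketch: contrast subword and character tokenization of a biomedical
# sentence. Requires the Hugging Face `transformers` package; the checkpoint
# and text are illustrative only.
from transformers import AutoTokenizer

text = "Metformin reduces hepatic gluconeogenesis."

# Subword (WordPiece) view, as used by BERT-style encoders.
subword_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
print("subword:", subword_tok.tokenize(text))

# Character view: the simplest token-free alternative.
print("chars  :", list(text))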

Contextualized Medication Information Extraction Using Transformer-based Deep Learning Architectures

Mar 14, 2023
Aokun Chen, Zehao Yu, Xi Yang, Yi Guo, Jiang Bian, Yonghui Wu

Objective: To develop a natural language processing (NLP) system to extract medications and the contextual information that helps understand drug changes. This project is part of the 2022 n2c2 challenge. Materials and methods: We developed NLP systems for medication mention extraction, event classification (indicating whether a medication change is discussed), and context classification, which classifies the context of medication changes along 5 orthogonal dimensions related to drug changes. We explored 6 state-of-the-art pretrained transformer models for the three subtasks, including GatorTron, a large language model pretrained using >90 billion words of text (including >80 billion words from >290 million clinical notes identified at the University of Florida Health). We evaluated our NLP systems using annotated data and evaluation scripts provided by the 2022 n2c2 organizers. Results: Our GatorTron models achieved the best F1-scores of 0.9828 for medication extraction (ranked 3rd) and 0.9379 for event classification (ranked 2nd), and the best micro-average accuracy of 0.9126 for context classification. GatorTron outperformed existing transformer models pretrained on smaller general English and clinical text corpora, indicating the advantage of large language models. Conclusion: This study demonstrated the advantage of using large transformer models for contextual medication information extraction from clinical narratives.
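
A minimal sketch of the transformer token-classification setup such a system builds on is shown below. The checkpoint name and BIO label set are placeholders (not the GatorTron models from the paper), and the classification head is untrained, so real use would require fine-tuning on the n2c2 data.

# Sketch of transformer token classification for medication mention extraction.
# Checkpoint and labels are placeholders; the classification head is untrained.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-Medication", "I-Medication"]   # illustrative BIO scheme
checkpoint = "bert-base-uncased"                 # placeholder encoder

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(checkpoint, num_labels=len(labels))

note = "Patient was started on metformin 500 mg twice daily."
inputs = tokenizer(note, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits              # (1, seq_len, num_labels)
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
for token, label_id in zip(tokens, pred_ids):
    print(token, labels[label_id])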

HiNet: Novel Multi-Scenario & Multi-Task Learning with Hierarchical Information Extraction

Mar 14, 2023
Jie Zhou, Xianshuai Cao, Wenhao Li, Lin Bo, Kun Zhang, Chuan Luo, Qian Yu

Multi-scenario & multi-task learning has been widely applied to many recommendation systems in industrial applications, wherein an effective and practical approach is to carry out multi-scenario transfer learning on the basis of the Mixture-of-Expert (MoE) architecture. However, the MoE-based method, which aims to project all information into the same feature space, cannot effectively deal with the complex relationships inherent among various scenarios and tasks, resulting in unsatisfactory performance. To tackle this problem, we propose a Hierarchical information extraction Network (HiNet) for multi-scenario and multi-task recommendation, which achieves hierarchical extraction based on a coarse-to-fine knowledge transfer scheme. The multiple extraction layers of the hierarchical network enable the model to transfer valuable information across scenarios while preserving the specific features of scenarios and tasks. Furthermore, a novel scenario-aware attentive network module is proposed to explicitly model correlations between scenarios. Comprehensive experiments conducted on real-world industrial datasets from the Meituan Meishi platform demonstrate that HiNet achieves new state-of-the-art performance and significantly outperforms existing solutions. HiNet is currently fully deployed in two scenarios and has achieved order-quantity gains of 2.87% and 1.75%, respectively.
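
For readers unfamiliar with the Mixture-of-Expert pattern that HiNet builds on, a toy gated-expert layer in PyTorch is sketched below. The layer sizes and number of experts are arbitrary; this is not the HiNet architecture itself.

# Toy Mixture-of-Experts layer: a learned gate mixes the outputs of several
# expert networks. Purely illustrative of the MoE pattern referenced above.
import torch
import torch.nn as nn

class ToyMoE(nn.Module):
    def __init__(self, input_dim=64, hidden_dim=32, num_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
             for _ in range(num_experts)]
        )
        self.gate = nn.Linear(input_dim, num_experts)

    def forward(self, x):
        weights = torch.softmax(self.gate(x), dim=-1)               # (batch, experts)
        outputs = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, experts, hidden)
        return (weights.unsqueeze(-1) * outputs).sum(dim=1)         # gated mixture

features = torch.randn(8, 64)     # e.g. features from one scenario
print(ToyMoE()(features).shape)   # torch.Size([8, 32])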

ICL-D3IE: In-Context Learning with Diverse Demonstrations Updating for Document Information Extraction

Mar 09, 2023
Jiabang He, Lei Wang, Yi Hu, Ning Liu, Hui Liu, Xing Xu, Heng Tao Shen

Large language models (LLMs), such as GPT-3 and ChatGPT, have demonstrated remarkable results on various natural language processing (NLP) tasks with in-context learning, which involves inference based on a few demonstration examples. Despite these successes, no investigation has been conducted to assess the ability of LLMs to perform document information extraction (DIE) using in-context learning. Applying LLMs to DIE poses two challenges: the modality gap and the task gap. To this end, we propose a simple but effective in-context learning framework called ICL-D3IE, which enables LLMs to perform DIE with different types of demonstration examples. Specifically, we extract the most difficult and distinct segments from hard training documents as hard demonstrations that benefit all test instances. We design demonstrations describing relationships that enable LLMs to understand positional relationships. We introduce formatting demonstrations for easy answer extraction. Additionally, the framework improves diverse demonstrations by updating them iteratively. Our experiments on three widely used benchmark datasets demonstrate that ICL-D3IE enables GPT-3/ChatGPT to achieve superior performance compared to previous pre-trained methods fine-tuned with full training data, in both the in-distribution (ID) and out-of-distribution (OOD) settings.
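
A rough sketch of how the three demonstration types could be assembled into a single in-context prompt follows. The demonstration strings and the commented-out call_llm() are placeholders for illustration, not the prompts or API calls used in the paper.

# Assemble hard, layout and formatting demonstrations into one prompt.
hard_demos = ["Segment: 'INVOICE NO 4821' -> label: invoice_number"]
layout_demos = ["'Total' appears to the left of the amount it labels."]
format_demos = ['Answer as JSON: {"field": ..., "value": ...}']

def build_prompt(demos_by_type, test_segment):
    parts = []
    for name, demos in demos_by_type.items():
        parts.append(f"### {name} demonstrations")
        parts.extend(demos)
    parts.append("### Task")
    parts.append(f"Extract the fields from: {test_segment}")
    return "\n".join(parts)

prompt = build_prompt(
    {"hard": hard_demos, "layout": layout_demos, "formatting": format_demos},
    test_segment="'AMOUNT DUE  $1,240.00'",
)
# response = call_llm(prompt)   # placeholder for a GPT-3/ChatGPT call
print(prompt)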

Exploiting Asymmetry for Synthetic Training Data Generation: SynthIE and the Case of Information Extraction

Mar 07, 2023
Martin Josifoski, Marija Sakota, Maxime Peyrard, Robert West

Large language models (LLMs) show great potential for synthetic data generation. This work shows that useful data can be synthetically generated even for tasks that cannot be solved directly by the LLM: we show that, for problems with structured outputs, it is possible to prompt an LLM to perform the task in the opposite direction, generating plausible text for a target structure. Leveraging this asymmetry in task difficulty makes it possible to produce large-scale, high-quality data for complex tasks. We demonstrate the effectiveness of this approach on closed information extraction, where collecting ground-truth data is challenging and no satisfactory dataset exists to date. We synthetically generate a dataset of 1.8M data points, demonstrate its superior quality compared to existing datasets in a human evaluation, and use it to finetune small models (220M and 770M parameters). The models we introduce, SynthIE, outperform existing baselines of comparable size by a substantial gap of 57 and 79 absolute points in micro and macro F1, respectively. Code, data, and models are available at https://github.com/epfl-dlab/SynthIE.
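
A minimal sketch of the reverse-direction idea, assuming a generic generate() placeholder for the LLM call: given a target set of triples, ask the model to write text that expresses exactly those facts, then keep the (text, triples) pair as a synthetic training example. The triples and prompt wording are illustrative; the paper's actual pipeline is in the linked repository.

# Reverse direction: generate text from a target structure, not structure from text.
target_triples = [
    ("Marie Curie", "award received", "Nobel Prize in Physics"),
    ("Marie Curie", "field of work", "radioactivity"),
]

def triples_to_prompt(triples):
    facts = "\n".join(f"- {s} | {r} | {o}" for s, r, o in triples)
    return ("Write one fluent paragraph that states exactly these facts "
            "and nothing else:\n" + facts)

prompt = triples_to_prompt(target_triples)
# text = generate(prompt)   # placeholder LLM call; (text, target_triples)
#                           # then becomes one synthetic training example
print(prompt)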

A novel Multi to Single Module for small object detection

Mar 27, 2023
Xiaohui Guo

Small object detection presents a significant challenge in computer vision and object detection. The performance of small object detectors is often compromised by a lack of pixels and less significant features. This issue stems from information misalignment caused by variations in feature scale and from information loss during feature processing. In response to this challenge, this paper proposes a novel Multi to Single Module (M2S), which enhances a specific layer by improving feature extraction and refining features. Specifically, M2S includes the proposed Cross-scale Aggregation Module (CAM) and the explored Dual Relationship Module (DRM) to improve information extraction capabilities and feature refinement effects. Moreover, this paper enhances the accuracy of small object detection by utilizing M2S to generate an additional detection head. The effectiveness of the proposed method is evaluated on two datasets, VisDrone2021-DET and SeaDronesSeeV2. The experimental results demonstrate its improved performance compared with existing methods. Compared to the baseline model (YOLOv5s), M2S improves accuracy by about 1.1% on the VisDrone2021-DET test set and 15.68% on the SeaDronesSeeV2 validation set.
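
As a toy illustration of aggregating information across feature scales (not the CAM/DRM design itself), the sketch below upsamples a coarse feature map, concatenates it with a finer one, and fuses the result with a 1x1 convolution; all shapes are arbitrary.

# Toy cross-scale fusion: upsample coarse features, concatenate with fine
# features, fuse with a 1x1 conv. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyCrossScaleFusion(nn.Module):
    def __init__(self, fine_ch=64, coarse_ch=128, out_ch=64):
        super().__init__()
        self.fuse = nn.Conv2d(fine_ch + coarse_ch, out_ch, kernel_size=1)

    def forward(self, fine, coarse):
        coarse_up = F.interpolate(coarse, size=fine.shape[-2:], mode="nearest")
        return self.fuse(torch.cat([fine, coarse_up], dim=1))

fine = torch.randn(1, 64, 80, 80)     # high-resolution, shallow features
coarse = torch.randn(1, 128, 40, 40)  # low-resolution, deep features
print(ToyCrossScaleFusion()(fine, coarse).shape)  # torch.Size([1, 64, 80, 80])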

Unified Text Structuralization with Instruction-tuned Language Models

Mar 27, 2023
Xuanfan Ni, Piji Li

Text structuralization is an important field of natural language processing (NLP) that consists of information extraction (IE) and structure formalization. However, current studies of text structuralization suffer from a shortage of manually annotated, high-quality datasets from different domains and languages, which require specialized professional knowledge to produce. In addition, most IE methods are designed for a specific type of structured data, e.g., entities, relations, or events, making them hard to generalize to others. In this work, we propose a simple and efficient approach to instruct a large language model (LLM) to extract a variety of structures from text. More concretely, we add a prefix and a suffix instruction to indicate the desired IE task and structure type, respectively, before feeding the text into an LLM. Experiments on two LLMs show that this approach enables language models to perform comparably to other state-of-the-art methods on datasets covering a variety of languages and knowledge domains, and to generalize to other IE sub-tasks by changing the content of the instruction. Another benefit of our approach is that it can help researchers build datasets in low-resource and domain-specific scenarios, e.g., finance and law, at low cost.

* 13 pages, 5 figures 
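
A minimal sketch of the prefix/suffix instruction idea: wrap the input text with one instruction naming the IE task and another naming the desired output structure. The instruction wording below is invented for illustration; the paper's templates may differ.

# Wrap input text with a task prefix and a structure-type suffix.
def build_structuralization_prompt(text, task="relation extraction",
                                   structure="(head, relation, tail) triples"):
    prefix = f"Perform {task} on the following text."
    suffix = f"Return the result as {structure}."
    return f"{prefix}\n\n{text}\n\n{suffix}"

prompt = build_structuralization_prompt(
    "Pierre Curie shared the 1903 Nobel Prize in Physics with Marie Curie."
)
# output = llm(prompt)   # placeholder for the instruction-tuned LLM call
print(prompt)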

Zero-Shot Information Extraction via Chatting with ChatGPT

Feb 20, 2023
Xiang Wei, Xingyu Cui, Ning Cheng, Xiaobin Wang, Xin Zhang, Shen Huang, Pengjun Xie, Jinan Xu, Yufeng Chen, Meishan Zhang, Yong Jiang, Wenjuan Han

Zero-shot information extraction (IE) aims to build IE systems from unannotated text. It is challenging because it involves little human intervention. Challenging but worthwhile, zero-shot IE reduces the time and effort that data labeling takes. Recent efforts on large language models (LLMs, e.g., GPT-3, ChatGPT) show promising performance in zero-shot settings, inspiring us to explore prompt-based methods. In this work, we ask whether strong IE models can be constructed by directly prompting LLMs. Specifically, we transform the zero-shot IE task into a multi-turn question-answering problem with a two-stage framework (ChatIE). With the power of ChatGPT, we extensively evaluate our framework on three IE tasks: entity-relation triple extraction, named entity recognition, and event extraction. Empirical results on six datasets across two languages show that ChatIE achieves impressive performance and even surpasses some full-shot models on several datasets (e.g., NYT11-HRL). We believe that our work could shed light on building IE models with limited resources.
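
A rough two-stage, multi-turn sketch in the spirit of ChatIE is given below, assuming a placeholder chat() function standing in for a ChatGPT-style API and an invented relation schema; the actual prompts and schemas are task-specific.

# Stage 1: ask which relation types occur; stage 2: ask for the triples of
# each detected type, keeping the conversation history across turns.
RELATION_TYPES = ["founded", "located_in", "works_for"]

def chat(messages):
    raise NotImplementedError("plug in an LLM chat API that returns a parsed list")

def chatie_style_extract(sentence):
    messages = [{"role": "user",
                 "content": f"Which of {RELATION_TYPES} appear in: {sentence}"}]
    detected = chat(messages)               # stage 1: relation type detection
    messages.append({"role": "assistant", "content": str(detected)})
    triples = []
    for rel in detected:                    # stage 2: one turn per relation type
        messages.append({"role": "user",
                         "content": f"List all '{rel}' triples as (head, tail)."})
        triples.extend(chat(messages))
    return triples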

Optimising Human-Machine Collaboration for Efficient High-Precision Information Extraction from Text Documents

Feb 18, 2023
Bradley Butcher, Miri Zilka, Darren Cook, Jiri Hron, Adrian Weller

While humans can extract information from unstructured text with high precision and recall, this is often too time-consuming to be practical. Automated approaches, on the other hand, produce nearly immediate results but may not be reliable enough for high-stakes applications where precision is essential. In this work, we consider the benefits and drawbacks of various human-only, human-machine, and machine-only information extraction approaches. We argue for the utility of a human-in-the-loop approach in applications where high precision is required but purely manual extraction is infeasible. We present a framework and an accompanying tool for information extraction using weak-supervision labelling with human validation. We demonstrate our approach on three criminal justice datasets. We find that the combination of computer speed and human understanding yields precision comparable to manual annotation while requiring only a fraction of the time, and significantly outperforms fully automated baselines in terms of precision.
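
A minimal sketch of weak-supervision labelling with a human validation step, assuming simple regex labelling functions: the functions propose candidate spans cheaply, and a person confirms or rejects each one. This is illustrative only, not the paper's tool.

# Labelling functions propose candidates; a human accepts or rejects them.
import re

LABELLING_FUNCTIONS = {
    "sentence_length": re.compile(r"\b\d+\s+(?:months|years)\b", re.I),
    "offence_date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{4}\b"),
}

def propose(text):
    for field, pattern in LABELLING_FUNCTIONS.items():
        for match in pattern.finditer(text):
            yield {"field": field, "span": match.group(0)}

document = "Sentenced to 18 months on 04/11/2021."
for candidate in propose(document):
    answer = input(f"Accept {candidate}? [y/n] ")   # human validation step
    if answer.strip().lower() == "y":
        print("accepted:", candidate)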

DocRED-FE: A Document-Level Fine-Grained Entity And Relation Extraction Dataset

Mar 21, 2023
Hongbo Wang, Weimin Xiong, Yifan Song, Dawei Zhu, Yu Xia, Sujian Li

Joint entity and relation extraction (JERE) is one of the most important tasks in information extraction. However, most existing works focus on sentence-level, coarse-grained JERE, which has limitations in real-world scenarios. In this paper, we construct a large-scale, document-level, fine-grained JERE dataset, DocRED-FE, which improves DocRED with fine-grained entity types. Specifically, we redesign a hierarchical entity type schema including 11 coarse-grained types and 119 fine-grained types, and then re-annotate DocRED manually according to this schema. Through comprehensive experiments, we find that: (1) DocRED-FE is challenging for existing JERE models; (2) our fine-grained entity types promote relation classification. We make DocRED-FE, along with instructions and the code for our baselines, publicly available at https://github.com/PKU-TANGENT/DOCRED-FE.

* Accepted by IEEE ICASSP 2023. The first two authors contribute equally
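
A minimal sketch of iterating a DocRED-style JSON file is given below. The field names follow the original DocRED release ("sents", "vertexSet", "labels"); the fine-grained type field ("fine_type") and the file name are assumptions that may differ in the actual DocRED-FE release.

# Iterate documents, printing (head, relation, tail) with entity types.
import json

with open("docred_fe_train.json") as f:   # hypothetical file name
    documents = json.load(f)

for doc in documents[:1]:
    entities = doc["vertexSet"]           # one cluster of mentions per entity
    for relation in doc["labels"]:
        head = entities[relation["h"]][0]
        tail = entities[relation["t"]][0]
        print(head["name"], head.get("fine_type", head["type"]),
              relation["r"],
              tail["name"], tail.get("fine_type", tail["type"]))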