Yixuan Cao

Guideline Learning for In-context Information Extraction

Oct 21, 2023
Chaoxu Pang, Yixuan Cao, Qiang Ding, Ping Luo

Figures 1–4 for Guideline Learning for In-context Information Extraction

Large language models (LLMs) can perform a new task by merely conditioning on task instructions and a few input-output examples, without optimizing any parameters. This is called In-Context Learning (ICL). In-context Information Extraction (IE) has recently garnered attention in the research community. However, the performance of in-context IE generally lags behind state-of-the-art supervised expert models. We highlight a key reason for this shortfall: the underspecified task description. The limited-length context struggles to thoroughly express the intricate IE task instructions and various edge cases, leading to a misalignment with human task comprehension. In this paper, we propose a Guideline Learning (GL) framework for in-context IE that reflectively learns and follows guidelines. During the learning phase, GL automatically synthesizes a set of guidelines from a few error cases, and during inference, GL retrieves helpful guidelines for better ICL. Moreover, we propose a self-consistency-based active learning method to enhance the efficiency of GL. Experiments on event extraction and relation extraction show that GL can significantly improve the performance of in-context IE.
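The inference side of the framework can be sketched as follows: retrieve the guidelines most relevant to the new input and prepend them to the in-context prompt. This is a minimal illustration, not the paper's implementation; the lexical-similarity retriever, the guideline texts, and the prompt layout are all assumptions made for the sketch.

```python
from difflib import SequenceMatcher


def retrieve_guidelines(query, guidelines, k=2):
    """Rank guidelines by similarity to the input and keep the top k.
    (A stand-in retriever using lexical overlap; the actual retrieval
    method is an assumption here, not taken from the paper.)"""
    scored = sorted(
        guidelines,
        key=lambda g: SequenceMatcher(None, query.lower(), g.lower()).ratio(),
        reverse=True,
    )
    return scored[:k]


def build_icl_prompt(task_instruction, guidelines, examples, query):
    """Assemble an in-context prompt: task instruction, retrieved
    guidelines, few-shot input-output examples, then the new input."""
    parts = [task_instruction]
    parts += [f"Guideline: {g}" for g in guidelines]
    parts += [f"Input: {x}\nOutput: {y}" for x, y in examples]
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)


# Hypothetical guidelines, as might be synthesized from past error cases.
guidelines = [
    "Annotate the acquiring company, not its subsidiary, as the buyer.",
    "Dates in relative form ('last year') are not valid event times.",
    "Pronouns are never valid argument spans; resolve them first.",
]
query = "ACME's subsidiary bought BetaCorp last year"
top = retrieve_guidelines(query, guidelines)
prompt = build_icl_prompt(
    "Extract acquisition events as (buyer, target, date).",
    top,
    [("X acquired Y in 2020.", "(X, Y, 2020)")],
    query,
)
print(prompt)
```

The assembled prompt would then be sent to the LLM; the learning phase that synthesizes guidelines from error cases is omitted here.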

* EMNLP 2023 main conference 

Extracting Variable-Depth Logical Document Hierarchy from Long Documents: Method, Evaluation, and Application

May 14, 2021
Rongyu Cao, Yixuan Cao, Ganbin Zhou, Ping Luo

Figures 1–4 for Extracting Variable-Depth Logical Document Hierarchy from Long Documents: Method, Evaluation, and Application

In this paper, we study the problem of extracting variable-depth "logical document hierarchy" from long documents, namely organizing the recognized "physical document objects" into hierarchical structures. The discovery of logical document hierarchy is a vital step in supporting many downstream applications. However, long documents, containing hundreds or even thousands of pages and variable-depth hierarchy, challenge the existing methods. To address these challenges, we develop a framework, namely Hierarchy Extraction from Long Document (HELD), which "sequentially" inserts each physical object at the proper position in the current tree. Determining whether each candidate position is proper can be formulated as a binary classification problem. To further improve its effectiveness and efficiency, we study design variants of HELD, including the traversal order over insertion positions, explicit versus implicit heading extraction, tolerance to insertion errors in predecessor steps, and so on. Empirical experiments on thousands of long documents from the Chinese financial market, the English financial market, and English scientific publications show that the HELD model with "root-to-leaf" traversal order and explicit heading extraction best balances effectiveness and efficiency, achieving accuracies of 0.9726, 0.7291, and 0.9578 on the Chinese financial, English financial, and arXiv datasets, respectively. Finally, we show that logical document hierarchy can be employed to significantly improve the performance of the downstream passage retrieval task. In summary, we conduct a systematic study of this task in terms of methods, evaluation, and applications.
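The core insertion loop can be sketched as follows: walk the rightmost path of the current tree from root to leaf and attach each new physical object at the first proper position. This is a minimal sketch under assumptions; HELD learns the "is this position proper?" decision as a binary classifier, whereas the depth comparison below is a hand-written stand-in, and the numeric heading levels are hypothetical inputs.

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    text: str
    level: int                      # heading depth; the root sits at level 0
    children: list = field(default_factory=list)


def is_proper_child(parent, obj_level):
    """Binary decision: may an object of this depth attach under `parent`?
    (HELD learns this classifier; the depth comparison is an assumption.)"""
    return obj_level > parent.level


def insert(root, text, level):
    """Descend the rightmost path root-to-leaf and attach the new physical
    object at the shallowest proper position, mirroring the "root-to-leaf"
    traversal order the paper finds best."""
    node = root
    while node.children and is_proper_child(node.children[-1], level):
        node = node.children[-1]
    node.children.append(Node(text, level))


def show(node, depth=0):
    """Print the hierarchy with indentation reflecting tree depth."""
    for child in node.children:
        print("  " * depth + child.text)
        show(child, depth + 1)


# Hypothetical physical objects with pre-assigned heading depths.
root = Node("ROOT", 0)
for text, level in [("1 Introduction", 1), ("1.1 Background", 2), ("2 Method", 1)]:
    insert(root, text, level)
show(root)  # "1.1 Background" nests under "1 Introduction"
```

In the paper the proper-position test is a learned classifier over textual and layout features rather than a depth comparison, which is what lets the method handle variable-depth hierarchies.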

* Journal of Computer Science and Technology, 2021  
* 23 pages, 10 figures 