Ya Guo

Reading Order Matters: Information Extraction from Visually-rich Documents by Token Path Prediction

Oct 17, 2023
Chong Zhang, Ya Guo, Yi Tu, Huan Chen, Jinyang Tang, Huijia Zhu, Qi Zhang, Tao Gui

Recent advances in multimodal pre-trained models have significantly improved information extraction from visually-rich documents (VrDs), in which named entity recognition (NER) is treated as a sequence-labeling task of predicting BIO entity tags for tokens, following the typical NLP setting. However, the BIO-tagging scheme relies on the correct order of model inputs, which is not guaranteed in real-world NER on scanned VrDs, where text is recognized and arranged by OCR systems. This reading-order issue prevents the BIO-tagging scheme from marking entities accurately, making it impossible for sequence-labeling methods to predict correct named entities. To address the reading-order issue, we introduce Token Path Prediction (TPP), a simple prediction head that predicts entity mentions as token sequences within documents. As an alternative to token classification, TPP models the document layout as a complete directed graph of tokens and predicts token paths within the graph as entities. For better evaluation of VrD-NER systems, we also propose two revised benchmark datasets of NER on scanned documents that reflect real-world scenarios. Experimental results demonstrate the effectiveness of our method and suggest its potential as a universal solution to various information extraction tasks on documents.
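
As a rough, hypothetical illustration of the path-prediction idea described in the abstract, the sketch below decodes entity mentions from a matrix of token-to-token link scores by greedily following high-scoring outgoing links; the function name decode_token_paths, the thresholded greedy decoding, and the toy scores are assumptions for illustration, not the paper's actual TPP head or decoding procedure.

```python
import numpy as np

def decode_token_paths(link_scores: np.ndarray, threshold: float = 0.5):
    """Greedily follow the highest-scoring outgoing link from each
    unvisited token to form token paths (candidate entity mentions).

    link_scores[i, j] is a hypothetical score for "token i is
    immediately followed by token j within the same entity".
    """
    n = link_scores.shape[0]
    visited, paths = set(), []
    for start in range(n):
        if start in visited:
            continue
        path, cur = [start], start
        visited.add(start)
        while True:
            nxt = int(np.argmax(link_scores[cur]))
            if link_scores[cur, nxt] < threshold or nxt in visited:
                break
            path.append(nxt)
            visited.add(nxt)
            cur = nxt
        if len(path) > 1:
            paths.append(path)
    return paths

# Toy example: tokens 0 -> 2 -> 1 form one mention even though the
# OCR order (0, 1, 2, 3) would split it under BIO tagging.
scores = np.zeros((4, 4))
scores[0, 2], scores[2, 1] = 0.9, 0.8
print(decode_token_paths(scores))  # [[0, 2, 1]]
```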

* Accepted as a long paper in the main conference of EMNLP 2023 

LayoutMask: Enhance Text-Layout Interaction in Multi-modal Pre-training for Document Understanding

Jun 09, 2023
Yi Tu, Ya Guo, Huan Chen, Jinyang Tang

Visually-rich Document Understanding (VrDU) has attracted much research attention over the past years. Pre-trained models with transformer-based backbones, trained on large numbers of document images, have led to significant performance gains in this field. The major challenge is how to fuse the different modalities (text, layout, and image) of the documents in a unified model with different pre-training tasks. This paper focuses on improving text-layout interactions and proposes a novel multi-modal pre-training model, LayoutMask. LayoutMask uses local 1D positions, instead of global 1D positions, as layout input and has two pre-training objectives: (1) Masked Language Modeling: predicting masked tokens with two novel masking strategies; (2) Masked Position Modeling: predicting masked 2D positions to improve layout representation learning. LayoutMask can enhance the interactions between the text and layout modalities in a unified model and produce adaptive and robust multi-modal representations for downstream tasks. Experimental results show that our proposed method achieves state-of-the-art results on a wide variety of VrDU problems, including form understanding, receipt understanding, and document image classification.
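
A minimal sketch of the local 1D position idea, assuming tokens are grouped into OCR text segments: position ids restart at the start of each segment instead of counting globally across the document. The grouping granularity and the helper name local_1d_positions are assumptions for illustration rather than the paper's implementation.

```python
def local_1d_positions(segment_ids):
    """Return a 1D position id per token that restarts from 0 at the
    beginning of every text segment, rather than counting globally."""
    positions, counters = [], {}
    for seg in segment_ids:
        positions.append(counters.get(seg, 0))
        counters[seg] = counters.get(seg, 0) + 1
    return positions

# Six tokens from two OCR segments; global positions would be 0..5.
print(local_1d_positions([0, 0, 0, 1, 1, 1]))  # [0, 1, 2, 0, 1, 2]
```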

* Accepted by ACL 2023 main conference 

Unsupervised domain adaptation semantic segmentation of high-resolution remote sensing imagery with invariant domain-level context memory

Aug 16, 2022
Jingru Zhu, Ya Guo, Geng Sun, Libo Yang, Min Deng, Jie Chen

Semantic segmentation is a key technique in the automatic interpretation of high-resolution remote sensing (HRS) imagery and has drawn much attention in the remote sensing community. Deep convolutional neural networks (DCNNs) have been successfully applied to HRS imagery semantic segmentation thanks to their hierarchical representation ability. However, the heavy dependence on large amounts of densely annotated training data and the sensitivity to variations in data distribution severely restrict the potential application of DCNNs to the semantic segmentation of HRS imagery. This study proposes a novel unsupervised domain adaptation semantic segmentation network (MemoryAdaptNet) for the semantic segmentation of HRS imagery. MemoryAdaptNet constructs an output-space adversarial learning scheme to bridge the distribution discrepancy between the source and target domains and to reduce the influence of domain shift. Specifically, we embed an invariant feature memory module to store invariant domain-level context information, because features obtained from adversarial learning tend to represent only the variant features of the current, limited inputs. A category attention-driven invariant domain-level context aggregation module then fuses the stored context with the current pseudo-invariant features to further augment the pixel representations. An entropy-based pseudo-label filtering strategy is used to update the memory module with high-confidence pseudo-invariant features of the current target images. Extensive experiments on three cross-domain tasks indicate that our proposed MemoryAdaptNet is remarkably superior to state-of-the-art methods.
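
A minimal sketch, assuming per-class prototype memory with exponential-moving-average updates, of how entropy-filtered pseudo-labels could refresh an invariant feature memory from a target image; the threshold, momentum, and function name update_class_memory are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def update_class_memory(memory, features, logits,
                        entropy_thresh=0.4, momentum=0.99):
    """Update per-class feature prototypes with high-confidence target pixels.

    memory:   (C, D) running class prototypes (the "invariant" memory).
    features: (D, H, W) pixel features of the current target image.
    logits:   (C, H, W) segmentation logits for the same image.
    """
    probs = F.softmax(logits, dim=0)                          # (C, H, W)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(0)   # (H, W)
    pseudo = probs.argmax(0)                                   # (H, W) pseudo labels
    keep = entropy < entropy_thresh                            # confident pixels only
    feats = features.permute(1, 2, 0)                          # (H, W, D)
    for c in range(memory.shape[0]):
        mask = keep & (pseudo == c)
        if mask.any():
            proto = feats[mask].mean(0)                        # (D,) class prototype
            memory[c] = momentum * memory[c] + (1 - momentum) * proto
    return memory
```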

* Submitted to IEEE Transactions on Geoscience and Remote Sensing (IEEE TGRS), 17 pages, 12 figures and 8 tables 

A comprehensive benchmark analysis for sand dust image reconstruction

Feb 07, 2022
Yazhong Si, Fan Yang, Ya Guo, Wei Zhang, Yipu Yang

Numerous sand dust image enhancement algorithms have been proposed in recent years. To the best of our knowledge, however, most methods evaluate their performance in a no-reference manner on a few real-world images selected from the internet. It is therefore unclear how to quantitatively analyze the performance of these algorithms in a supervised way, or how to gauge progress in the field. Moreover, owing to the absence of large-scale benchmark datasets, there have been no well-known reports of data-driven methods for sand dust image enhancement to date. To advance the development of deep learning-based algorithms for sand dust image reconstruction while enabling supervised, objective evaluation of algorithm performance, this paper presents a comprehensive perceptual study and analysis of real-world sand dust images and constructs a Sand-dust Image Reconstruction Benchmark (SIRB) for training Convolutional Neural Networks (CNNs) and evaluating algorithm performance. In addition, we adopt an existing image transformation neural network trained on SIRB as a baseline to illustrate the generalization of SIRB for training CNNs. Finally, we conduct qualitative and quantitative evaluations to demonstrate the performance and limitations of the state of the art (SOTA), shedding light on future research in sand dust image reconstruction.
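
As a sketch of the supervised, full-reference evaluation that a paired benchmark such as SIRB enables, the snippet below averages PSNR and SSIM over restored/ground-truth image pairs with scikit-image; the metric choice and the helper name evaluate_pairs are assumptions for illustration, not the benchmark's prescribed protocol.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pairs(restored_images, reference_images):
    """Average PSNR/SSIM over paired restored and ground-truth images.

    Both arguments are lists of HxWx3 uint8 arrays, paired in order,
    as a benchmark with reference images would provide.
    """
    psnrs, ssims = [], []
    for out, ref in zip(restored_images, reference_images):
        psnrs.append(peak_signal_noise_ratio(ref, out, data_range=255))
        ssims.append(structural_similarity(ref, out, channel_axis=-1,
                                           data_range=255))
    return float(np.mean(psnrs)), float(np.mean(ssims))
```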

* 13 pages, 12 figures 