
Nan Hua


LMDX: Language Model-based Document Information Extraction and Localization

Sep 19, 2023
Vincent Perot, Kai Kang, Florian Luisier, Guolong Su, Xiaoyu Sun, Ramya Sree Boppana, Zilong Wang, Jiaqi Mu, Hao Zhang, Nan Hua


Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP), improving the state of the art on many existing tasks and exhibiting emergent capabilities. However, LLMs have not yet been successfully applied to semi-structured document information extraction, which is at the core of many document processing workflows and consists of extracting key entities from a visually rich document (VRD) given a predefined target schema. The main obstacles to LLM adoption for this task have been the absence of layout encoding within LLMs, which is critical for high quality extraction, and the lack of a grounding mechanism to ensure the answer is not hallucinated. In this paper, we introduce Language Model-based Document Information Extraction and Localization (LMDX), a methodology to adapt arbitrary LLMs for document information extraction. LMDX can extract singular, repeated, and hierarchical entities, both with and without training data, while providing grounding guarantees and localizing the entities within the document. In particular, we apply LMDX to the PaLM 2-S LLM and evaluate it on the VRDU and CORD benchmarks, setting a new state of the art and showing how LMDX enables the creation of high quality, data-efficient parsers.
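The two mechanisms named in the abstract, layout encoding and grounding, lend themselves to a short illustration. The Python sketch below serializes OCR segments with quantized coordinates and segment identifiers into a prompt, then verifies that each extracted value actually appears in the segment it cites. The prompt format and the helper names (quantize, build_prompt, ground) are assumptions for illustration, not the paper's exact serialization.

```python
# Hedged sketch of an LMDX-style pipeline: OCR segments are serialized with
# quantized layout coordinates so a text-only LLM can reason about layout,
# and every extracted value must cite a segment identifier, letting the
# caller verify (ground) the answer against the source document.
# Field names and formats here are illustrative, not the paper's exact scheme.

import json

def quantize(coord, page_size, buckets=100):
    """Map an absolute pixel coordinate to a small integer bucket."""
    return min(buckets - 1, int(coord / page_size * buckets))

def build_prompt(ocr_segments, schema, page_w, page_h):
    """ocr_segments: list of dicts with 'id', 'text', 'x', 'y' (segment centers)."""
    lines = ["<document>"]
    for seg in ocr_segments:
        x, y = quantize(seg["x"], page_w), quantize(seg["y"], page_h)
        lines.append(f'{seg["text"]} {x}|{y} [{seg["id"]}]')
    lines.append("</document>")
    lines.append("<task>Extract the following entities as JSON, citing segment ids:</task>")
    lines.append(json.dumps(schema))
    return "\n".join(lines)

def ground(extraction, ocr_segments):
    """Reject any extracted value whose cited segment does not contain it."""
    by_id = {seg["id"]: seg["text"] for seg in ocr_segments}
    verified = {}
    for field, value in extraction.items():
        text, seg_id = value["text"], value["segment_id"]
        if text in by_id.get(seg_id, ""):
            verified[field] = {**value, "location": seg_id}
    return verified
```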


FormNetV2: Multimodal Graph Contrastive Learning for Form Document Information Extraction

May 04, 2023
Chen-Yu Lee, Chun-Liang Li, Hao Zhang, Timothy Dozat, Vincent Perot, Guolong Su, Xiang Zhang, Kihyuk Sohn, Nikolai Glushnev, Renshen Wang, Joshua Ainslie, Shangbang Long, Siyang Qin, Yasuhisa Fujii, Nan Hua, Tomas Pfister


The recent advent of self-supervised pre-training techniques has led to a surge in the use of multimodal learning in form document understanding. However, existing approaches that extend masked language modeling to other modalities require careful multi-task tuning, complex reconstruction target designs, or additional pre-training data. In FormNetV2, we introduce a centralized multimodal graph contrastive learning strategy to unify self-supervised pre-training for all modalities in one loss. The graph contrastive objective maximizes the agreement of multimodal representations, providing a natural interplay for all modalities without special customization. In addition, we extract image features within the bounding box that joins a pair of tokens connected by a graph edge, capturing more targeted visual cues without relying on a sophisticated, separately pre-trained image embedder. FormNetV2 establishes new state-of-the-art performance on the FUNSD, CORD, SROIE, and Payment benchmarks with a more compact model size.
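As a rough illustration of a graph contrastive objective of this kind, the sketch below applies a symmetric InfoNCE-style loss to node embeddings obtained from two corrupted views of the same document graph. The corruption functions named in the usage comment (drop_edges, drop_features) are placeholders, and FormNetV2's actual loss and view construction may differ.

```python
# Minimal sketch of a graph contrastive objective: node embeddings from two
# corrupted views of the same document graph are pulled together with an
# InfoNCE-style (NT-Xent) loss. Generic illustration, not FormNetV2's exact loss.

import torch
import torch.nn.functional as F

def graph_contrastive_loss(z1, z2, temperature=0.1):
    """z1, z2: [num_nodes, dim] embeddings of the same nodes under two graph views."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature                      # similarity of every node pair
    targets = torch.arange(z1.size(0), device=z1.device)    # positives lie on the diagonal
    # Symmetric cross-entropy: view-1 nodes must find their view-2 counterparts and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Hypothetical usage: z1 = encoder(drop_edges(graph)); z2 = encoder(drop_features(graph))
# loss = graph_contrastive_loss(z1, z2)
```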

* Accepted to ACL 2023 

Protoformer: Embedding Prototypes for Transformers

Jun 25, 2022
Ashkan Farhangi, Ning Sui, Nan Hua, Haiyan Bai, Arthur Huang, Zhishan Guo

Transformers have been widely applied to text classification. Unfortunately, real-world data contain anomalies and noisy labels that pose challenges for state-of-the-art Transformers. This paper proposes Protoformer, a novel self-learning framework for Transformers that can leverage problematic samples for text classification. Protoformer features a selection mechanism for embedding samples that allows us to efficiently extract and utilize anomaly prototypes and difficult class prototypes. We demonstrate these capabilities on datasets with diverse textual structures (e.g., Twitter, IMDB, ArXiv). We also apply the framework to several models. The results indicate that Protoformer can improve current Transformers in various empirical settings.
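A minimal sketch of the kind of embedding-based prototype selection described above, assuming centroid distances as the selection signal: samples far from their own class centroid are treated as anomaly prototypes, and samples that sit nearly as close to a competing centroid as to their own are treated as difficult prototypes. The rules and cutoffs are illustrative, not Protoformer's exact mechanism.

```python
# Hedged sketch of prototype selection from transformer embeddings and (noisy)
# labels. Selection rules here are simplified stand-ins for Protoformer's.

import numpy as np

def select_prototypes(embeddings, labels, k=5):
    """embeddings: [n, d] array, labels: [n] int array (>= 2 classes). Returns index lists."""
    classes = np.unique(labels)
    centroids = {c: embeddings[labels == c].mean(axis=0) for c in classes}

    own_dist = np.array([np.linalg.norm(e - centroids[l]) for e, l in zip(embeddings, labels)])
    other_dist = np.array([
        min(np.linalg.norm(e - centroids[c]) for c in classes if c != l)
        for e, l in zip(embeddings, labels)
    ])

    anomaly_idx = np.argsort(-own_dist)[:k]                  # farthest from their own class
    difficult_idx = np.argsort(other_dist - own_dist)[:k]    # closest to a competing class
    return anomaly_idx.tolist(), difficult_idx.tolist()
```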

* Advances in Knowledge Discovery and Data Mining: 26th Pacific-Asia Conference, PAKDD 2022 

FormNet: Structural Encoding beyond Sequential Modeling in Form Document Information Extraction

Mar 24, 2022
Chen-Yu Lee, Chun-Liang Li, Timothy Dozat, Vincent Perot, Guolong Su, Nan Hua, Joshua Ainslie, Renshen Wang, Yasuhisa Fujii, Tomas Pfister


Sequence modeling has demonstrated state-of-the-art performance on natural language and document understanding tasks. However, it is challenging to correctly serialize tokens in form-like documents in practice due to the variety of their layout patterns. We propose FormNet, a structure-aware sequence model that mitigates the suboptimal serialization of forms. First, we design Rich Attention, which leverages the spatial relationships between tokens in a form for more precise attention score calculation. Second, we construct Super-Tokens for each word by embedding representations from their neighboring tokens through graph convolutions. FormNet therefore explicitly recovers local syntactic information that may have been lost during serialization. In experiments, FormNet outperforms existing methods with a more compact model size and less pre-training data, establishing new state-of-the-art performance on the CORD, FUNSD, and Payment benchmarks.
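To make the idea of spatially informed attention concrete, the sketch below adds a learned bias, computed from pairwise token offsets, to standard multi-head attention scores. This is a generic spatially biased attention layer under assumed input shapes, not FormNet's actual Rich Attention parameterization.

```python
# Minimal sketch: pairwise layout features (x/y offsets between token boxes) are
# mapped to a per-head bias that is added to the usual query-key attention scores.

import torch
import torch.nn as nn

class SpatiallyBiasedAttention(nn.Module):
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Map a pairwise (dx, dy) offset to one bias value per head.
        self.spatial_bias = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, num_heads))
        self.num_heads = num_heads

    def forward(self, tokens, coords):
        """tokens: [B, T, dim], coords: [B, T, 2] normalized (x, y) token centers."""
        offsets = coords.unsqueeze(2) - coords.unsqueeze(1)      # [B, T, T, 2]
        bias = self.spatial_bias(offsets).permute(0, 3, 1, 2)    # [B, heads, T, T]
        B, T, _ = tokens.shape
        bias = bias.reshape(B * self.num_heads, T, T)            # float mask added to attention scores
        out, _ = self.attn(tokens, tokens, tokens, attn_mask=bias)
        return out
```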

* Accepted to ACL 2022 

Pruning Redundant Mappings in Transformer Models via Spectral-Normalized Identity Prior

Oct 05, 2020
Zi Lin, Jeremiah Zhe Liu, Zi Yang, Nan Hua, Dan Roth


Traditional (unstructured) pruning methods for a Transformer model focus on regularizing the individual weights by penalizing them toward zero. In this work, we explore spectral-normalized identity priors (SNIP), a structured pruning approach that penalizes an entire residual module in a Transformer model toward an identity mapping. Our method identifies and discards unimportant non-linear mappings in the residual connections by applying a thresholding operator on the function norm. It is applicable to any structured module, including a single attention head, an entire attention block, or a feed-forward subnetwork. Furthermore, we introduce spectral normalization to stabilize the distribution of the post-activation values of the Transformer layers, further improving the pruning effectiveness of the proposed methodology. We conduct experiments with BERT on five GLUE benchmark tasks to demonstrate that SNIP achieves effective pruning results while maintaining comparable performance. Specifically, we improve performance over the state of the art by 0.5 to 1.0% on average at a 50% compression ratio.
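A minimal sketch of the pruning rule, assuming a simple data-driven estimate of each residual branch's function norm: branches whose estimate falls below a threshold are replaced by the identity mapping, so the block reduces to its skip connection. The scoring and threshold below stand in for the paper's spectral-normalized formulation.

```python
# Hedged sketch: each residual branch f(x) in x + f(x) is scored by an estimate
# of its function norm; low-scoring branches are pruned to the identity mapping.

import torch
import torch.nn as nn

class PrunableResidual(nn.Module):
    def __init__(self, branch: nn.Module):
        super().__init__()
        self.branch = branch       # e.g. an attention head or feed-forward subnetwork
        self.pruned = False

    @torch.no_grad()
    def score(self, sample_inputs):
        """Estimate the branch's function norm as mean ||f(x)|| / ||x|| on sample data."""
        fx, x = self.branch(sample_inputs), sample_inputs
        return (fx.norm(dim=-1) / x.norm(dim=-1).clamp_min(1e-8)).mean().item()

    def maybe_prune(self, sample_inputs, threshold=0.05):
        self.pruned = self.score(sample_inputs) < threshold
        return self.pruned

    def forward(self, x):
        return x if self.pruned else x + self.branch(x)

# Usage: wrap each Transformer sub-block, call maybe_prune on a calibration batch,
# then fine-tune the blocks that remain.
```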

* Findings of EMNLP 2020 

Universal Sentence Encoder

Apr 12, 2018
Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, Ray Kurzweil


We present models for encoding sentences into embedding vectors that specifically target transfer learning to other NLP tasks. The models are efficient and result in accurate performance on diverse transfer tasks. Two variants of the encoding models allow for trade-offs between accuracy and compute resources. For both variants, we investigate and report the relationship between model complexity, resource consumption, the availability of transfer task training data, and task performance. Comparisons are made with baselines that use word level transfer learning via pretrained word embeddings, as well as baselines that do not use any transfer learning. We find that transfer learning using sentence embeddings tends to outperform word level transfer. With transfer learning via sentence embeddings, we observe surprisingly good performance with minimal amounts of supervised training data for a transfer task. We obtain encouraging results on Word Embedding Association Tests (WEAT) targeted at detecting model bias. Our pre-trained sentence encoding models are made freely available for download and on TF Hub.
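For reference, a minimal usage sketch of loading a released Universal Sentence Encoder module from TF Hub and embedding a few sentences. The module URL below points to a later public version than the one referenced in the paper's Listing 1; check tfhub.dev for the current release.

```python
# Load a released Universal Sentence Encoder SavedModel from TF Hub and embed sentences.
import tensorflow_hub as hub

embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
sentences = [
    "The quick brown fox jumps over the lazy dog.",
    "Sentence embeddings can be reused across NLP tasks.",
]
embeddings = embed(sentences)   # tensor of shape [2, 512]
print(embeddings.shape)
```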

* 7 pages; fixed module URL in Listing 1 