GenKIE: Robust Generative Multimodal Document Key Information Extraction

Oct 24, 2023
Panfeng Cao, Ye Wang, Qiang Zhang, Zaiqiao Meng

[Figures 1–4 for GenKIE: Robust Generative Multimodal Document Key Information Extraction]

Key information extraction (KIE) from scanned documents has gained increasing attention because of its applications in various domains. Although some recent KIE approaches achieve promising results, they are usually built on discriminative models, which cannot handle optical character recognition (OCR) errors and require laborious token-level labelling. In this paper, we propose a novel generative end-to-end model, named GenKIE, to address the KIE task. GenKIE is a sequence-to-sequence multimodal generative model that uses multimodal encoders to embed visual, layout and textual features and a decoder to generate the desired output. Well-designed prompts incorporate the label semantics as weakly supervised signals and guide the generation of the key information. One notable advantage of the generative model is that it enables automatic correction of OCR errors; moreover, it does not require token-level annotation. Extensive experiments on multiple public real-world datasets show that GenKIE generalizes effectively over different types of documents and achieves state-of-the-art results. Our experiments also validate the model's robustness against OCR errors, making GenKIE highly applicable in real-world scenarios.
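To make the prompt-based generative formulation concrete, here is a minimal sketch of how label semantics can be folded into a seq2seq input and how the generated string can be parsed back into entities. The template format ("<field> is ...") and the helper names are illustrative assumptions, not the paper's exact prompt design:

```python
import re

def build_prompt(ocr_text, fields):
    """Append one template question per target field to the OCR text.
    (Assumed template format; GenKIE's actual prompts may differ.)"""
    questions = " ".join(f"{field} is?" for field in fields)
    return f"{ocr_text} {questions}"

def parse_generation(generated, fields):
    """Parse a generated string like 'company is ACME Corp. address is 1 Main St'
    into a {field: value} dict, so no token-level labels are needed."""
    result = {}
    for field in fields:
        # Non-greedy match up to the next period or end of string.
        m = re.search(rf"{re.escape(field)} is (.*?)(?:\.|$)", generated)
        if m:
            result[field] = m.group(1).strip()
    return result
```

In a full pipeline, `build_prompt` output would be fed to the multimodal encoder–decoder, and because the decoder generates free text conditioned on visual and layout features, it can emit a corrected value even when the OCR transcript is noisy.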

* Accepted to EMNLP 2023 (Findings)