
Sahitya Mantravadi

Lights, Camera, Action! A Framework to Improve NLP Accuracy over OCR documents

Aug 06, 2021
Amit Gupte, Alexey Romanov, Sahitya Mantravadi, Dalitso Banda, Jianjie Liu, Raza Khan, Lakshmanan Ramu Meenal, Benjamin Han, Soundar Srinivasan

Document digitization is essential for the digital transformation of our societies, yet a crucial step in the process, Optical Character Recognition (OCR), is still not perfect. Even commercial OCR systems can produce questionable output depending on the fidelity of the scanned documents. In this paper, we demonstrate an effective framework for mitigating OCR errors for any downstream NLP task, using Named Entity Recognition (NER) as an example. We first address the data scarcity problem for model training by constructing a document synthesis pipeline, generating realistic but degraded data with NER labels. We measure the NER accuracy drop at various degradation levels and show that a text restoration model, trained on the degraded data, significantly closes the NER accuracy gaps caused by OCR errors, including on an out-of-domain dataset. For the benefit of the community, we have made the document synthesis pipeline available as an open-source project.

* Accepted to the Document Intelligence Workshop at KDD 2021. The source code of Genalog is available at https://github.com/microsoft/genalog 
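To make the idea concrete, below is a minimal sketch of the kind of character-level noise such a synthesis pipeline produces. This is not the Genalog API: Genalog renders text to degraded document images and runs OCR on them, whereas this toy approximates only the resulting character errors. The confusion table, function name, and rates are all illustrative assumptions.

```python
import random

# Illustrative character confusions mimicking common OCR errors
# (e.g. l -> 1, O -> 0); these are assumptions, not drawn from Genalog.
OCR_CONFUSIONS = {"l": "1", "O": "0", "S": "5", "e": "c", "B": "8"}

def degrade(text: str, error_rate: float = 0.05, seed: int = 0) -> str:
    """Inject substitutions and deletions into clean text to mimic OCR output."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        r = rng.random()
        if ch in OCR_CONFUSIONS and r < error_rate:
            out.append(OCR_CONFUSIONS[ch])   # character confusion, e.g. l -> 1
        elif r < error_rate / 4:
            pass                             # character silently dropped
        else:
            out.append(ch)
    return "".join(out)

if __name__ == "__main__":
    clean = "Satya Nadella spoke in Seattle on May 1, 2021."
    for rate in (0.05, 0.2, 0.4):            # increasing degradation levels
        print(rate, degrade(clean, error_rate=rate, seed=42))
```

Note that degradation shifts character offsets, so a real pipeline must realign the NER labels with the degraded text; pairing each degraded sentence with its clean original also yields training data for the text restoration model described in the paper.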

Examination and Extension of Strategies for Improving Personalized Language Modeling via Interpolation

Jun 09, 2020
Liqun Shao, Sahitya Mantravadi, Tom Manzini, Alejandro Buendia, Manon Knoertzer, Soundar Srinivasan, Chris Quirk

In this paper, we detail novel strategies for interpolating personalized language models, along with methods for handling out-of-vocabulary (OOV) tokens, to improve personalization. Using publicly available data from Reddit, we demonstrate improvements in offline metrics at the user level by interpolating a global LSTM-based authoring model with a user-personalized n-gram model. By optimizing this approach with a back-off to a uniform OOV penalty and tuning the interpolation coefficient, we observe that over 80% of users see an improvement in perplexity, with an average improvement of 5.2% per user. Through this research we extend previous work on building natural language interfaces (NLIs) and improve the robustness of metrics for downstream tasks.

* ACL Natural Language Interface Workshop 2020, short paper 
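A minimal sketch of the interpolation step described in the abstract, assuming toy probability tables in place of the trained LSTM and n-gram models. The names `lam`, `vocab_size`, and both functions are illustrative; the paper tunes the interpolation coefficient and the uniform OOV penalty rather than fixing them.

```python
import math

def interp_logprob(token, p_global, p_user, lam=0.3, vocab_size=50_000):
    """Linear interpolation of a global model with a user n-gram model:
       p(w) = (1 - lam) * p_global(w) + lam * p_user(w),
       backing off to a uniform OOV penalty of 1/|V| when a model
       has no estimate for the token."""
    oov = 1.0 / vocab_size                    # uniform OOV back-off penalty
    pg = p_global.get(token, oov)
    pu = p_user.get(token, oov)
    return math.log((1.0 - lam) * pg + lam * pu)

def perplexity(tokens, p_global, p_user, **kw):
    """Per-user offline metric: exp of the mean negative log-probability."""
    nll = -sum(interp_logprob(t, p_global, p_user, **kw) for t in tokens)
    return math.exp(nll / len(tokens))

# Toy example: stand-in unigram tables for the global and user models.
p_global = {"the": 0.05, "game": 0.001, "ranked": 0.0005}
p_user   = {"ranked": 0.02, "elo": 0.01}      # a user who posts about chess
print(perplexity(["the", "ranked", "elo"], p_global, p_user, lam=0.5))
```

In this setup, sweeping `lam` per user and choosing the value that minimizes held-out perplexity is how the per-user lift reported in the abstract would be measured.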