Benjamin Han

Think While You Write: Hypothesis Verification Promotes Faithful Knowledge-to-Text Generation

Nov 16, 2023
Yifu Qiu, Varun Embar, Shay B. Cohen, Benjamin Han

Neural knowledge-to-text generation models often struggle to faithfully describe the input facts: they may produce hallucinations that contradict the given facts or describe facts not present in the input. To reduce hallucinations, we propose a novel decoding method, TWEAK (Think While Effectively Articulating Knowledge). TWEAK treats the generated sequence at each decoding step, together with its possible future sequences, as hypotheses, and ranks each generation candidate by how well its corresponding hypotheses support the input facts, as judged by a Hypothesis Verification Model (HVM). We first demonstrate the effectiveness of TWEAK by using a Natural Language Inference (NLI) model as the HVM and report improved faithfulness with minimal impact on quality. We then replace the NLI model with a task-specific HVM trained on a first-of-its-kind dataset, FATE (Fact-Aligned Textual Entailment), which pairs input facts with their faithful and hallucinated descriptions, with the hallucinated spans marked. The new HVM further improves faithfulness and quality, and runs faster. Overall, the best TWEAK variants improve faithfulness, as measured by FactKB, by an average of 2.22/7.17 points on WebNLG and TekGen/GenWiki, respectively, with only 0.14/0.32 points of degradation in quality as measured by BERTScore on the same datasets. Since TWEAK is a decoding-only approach, it can be integrated with any neural generative model without retraining.
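
The core re-ranking idea can be sketched in a few lines. This is a toy illustration, not the paper's implementation: `hvm_score` is a hypothetical stand-in for a trained HVM (NLI model or the FATE-trained verifier) that simply rewards lexical overlap with the input facts, and `alpha` is an assumed interpolation weight.

```python
# Toy sketch of TWEAK-style hypothesis-verified re-ranking at one decoding
# step: each candidate continuation is scored by the generator's
# log-probability plus a faithfulness score from a stand-in HVM.

def hvm_score(facts, hypothesis):
    """Toy HVM: fraction of fact tokens supported by the hypothesis."""
    fact_tokens = {t for fact in facts for t in fact.lower().split()}
    hyp_tokens = set(hypothesis.lower().split())
    return len(fact_tokens & hyp_tokens) / len(fact_tokens)

def rerank_candidates(facts, candidates, alpha=0.5):
    """Combine generator log-probability with the HVM faithfulness score
    and return candidate texts ranked best-first."""
    scored = [
        (alpha * logp + (1 - alpha) * hvm_score(facts, text), text)
        for text, logp in candidates
    ]
    return [text for _, text in sorted(scored, reverse=True)]

facts = ["Alan Turing born London"]
candidates = [
    ("Alan Turing was born in London", -1.0),  # faithful, slightly less probable
    ("Alan Turing was born in Paris", -0.9),   # hallucinated city
]
ranked = rerank_candidates(facts, candidates)
```

Although the generator prefers the hallucinated continuation, the faithfulness term flips the ranking — the effect a real HVM is meant to produce at every decoding step.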

FLEEK: Factual Error Detection and Correction with Evidence Retrieved from External Knowledge

Oct 26, 2023
Farima Fatahi Bayat, Kun Qian, Benjamin Han, Yisi Sang, Anton Belyi, Samira Khorshidi, Fei Wu, Ihab F. Ilyas, Yunyao Li

Detecting factual errors in textual information, whether generated by large language models (LLMs) or curated by humans, is crucial for making informed decisions. LLMs' inability to attribute their claims to external knowledge and their tendency to hallucinate make it difficult to rely on their responses. Humans, too, are prone to factual errors in their writing. Since manual detection and correction of factual errors is labor-intensive, developing an automatic approach can greatly reduce human effort. We present FLEEK, a prototype tool that automatically extracts factual claims from text, gathers evidence from external knowledge sources, evaluates the factuality of each claim, and suggests revisions for identified errors using the collected evidence. An initial empirical evaluation on fact error detection (77-85% F1) shows the potential of FLEEK. A video demo of FLEEK can be found at https://youtu.be/NapJFUlkPdQ.
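
The verify-and-revise step of such a pipeline can be illustrated with a minimal sketch. This is not FLEEK's implementation: `KB` is a hypothetical in-memory knowledge base standing in for external evidence retrieval, and claims arrive as ready-made triples rather than being extracted from free text.

```python
# Minimal sketch of a fact-checking step: look up evidence for a claim,
# return a verdict, and suggest an evidence-backed correction when refuted.

KB = {("Eiffel Tower", "located_in"): "Paris"}  # hypothetical evidence store

def verify_claim(subject, relation, obj):
    """Return (verdict, suggested_correction_or_None)."""
    evidence = KB.get((subject, relation))
    if evidence is None:
        return "unverifiable", None
    if evidence == obj:
        return "supported", None
    return "refuted", evidence  # suggest the evidence-backed value

verdict, fix = verify_claim("Eiffel Tower", "located_in", "Rome")
```

The "suggest revisions using the collected evidence" behavior falls out naturally: when the claim conflicts with retrieved evidence, the evidence itself is the proposed correction.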

* EMNLP 2023 (Demonstration Track) 

A Better Match for Drivers and Riders: Reinforcement Learning at Lyft

Oct 20, 2023
Xabi Azagirre, Akshay Balwally, Guillaume Candeli, Nicholas Chamandy, Benjamin Han, Alona King, Hyungjun Lee, Martin Loncaric, Sébastien Martin, Vijay Narasiman, Zhiwei Qin, Baptiste Richard, Sara Smoot, Sean Taylor, Garrett van Ryzin, Di Wu, Fei Yu, Alex Zamoshchin

To better match drivers with riders in our ridesharing application, we revised Lyft's core matching algorithm. We use a novel online reinforcement learning approach that estimates the future earnings of drivers in real time and uses this information to find more efficient matches. This change was the first documented implementation of a ridesharing matching algorithm that can learn and improve in real time. We evaluated the new approach during weeks of switchback experimentation in most Lyft markets and estimated how it benefited drivers, riders, and the platform. In particular, it enabled our drivers to serve millions of additional riders each year, leading to more than $30 million per year in incremental revenue. Lyft rolled out the algorithm globally in 2021.
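
The value-aware matching idea can be sketched abstractly: a match is scored not only by the immediate fare but also by how the trip changes the driver's estimated future earnings. Everything here is a hypothetical simplification — the real system learns a value function online, while `future_value` below is a fixed toy lookup table.

```python
# Sketch: score each candidate driver-rider match by immediate fare plus
# the change in the driver's estimated future earning potential.

future_value = {"airport": 25.0, "suburb": 5.0}  # assumed per-location values

def match_score(fare, driver_loc, dropoff_loc):
    # Immediate earnings plus the shift in future earning potential the
    # trip causes by relocating the driver.
    return fare + future_value[dropoff_loc] - future_value[driver_loc]

def best_match(rider, drivers):
    """Pick the driver whose match maximizes immediate plus future value."""
    return max(
        drivers,
        key=lambda d: match_score(rider["fare"], d["loc"], rider["dropoff"]),
    )

drivers = [{"id": "a", "loc": "airport"}, {"id": "b", "loc": "suburb"}]
rider = {"fare": 10.0, "dropoff": "airport"}
chosen = best_match(rider, drivers)
```

Note the system-level effect: the rider is assigned to the driver in the low-value area, moving that driver toward higher future earnings instead of pulling an already well-positioned driver away.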

Construction of Paired Knowledge Graph-Text Datasets Informed by Cyclic Evaluation

Sep 20, 2023
Ali Mousavi, Xin Zhan, He Bai, Peng Shi, Theo Rekatsinas, Benjamin Han, Yunyao Li, Jeff Pound, Josh Susskind, Natalie Schluter, Ihab Ilyas, Navdeep Jaitly

Datasets that pair Knowledge Graphs (KG) and text together (KG-T) can be used to train forward and reverse neural models that generate text from KG and vice versa. However, models trained on datasets where the KG and text pairs are not equivalent can suffer from more hallucination and poorer recall. In this paper, we verify this empirically by generating datasets with different levels of noise and find that noisier datasets do indeed lead to more hallucination. We argue that the ability of forward and reverse models trained on a dataset to cyclically regenerate the source KG or text is a proxy for the equivalence between the KG and the text in the dataset. Using cyclic evaluation, we find that the manually created WebNLG is much better than the automatically created TeKGen and T-REx. Guided by these observations, we construct a new, improved dataset called LAGRANGE using heuristics meant to improve equivalence between KG and text, and we show the impact of each heuristic on cyclic evaluation. We also construct two synthetic datasets using large language models (LLMs), and observe that these are conducive to models that perform well on cyclic generation of text, but less so on cyclic generation of KGs, probably because of a lack of a consistent underlying ontology.
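
The cyclic-evaluation idea — verbalize a KG, re-extract it, and measure what survives the round trip — can be sketched with rule-based stand-ins. Both toy "models" below are hypothetical; in the paper the forward and reverse directions are trained neural models.

```python
# Toy cyclic evaluation: KG -> text -> KG, scored by how much of the
# original KG is recovered after the round trip.

def kg_to_text(triples):
    """Stand-in forward model: naively verbalize each triple."""
    return ". ".join(f"{s} {r} {o}" for s, r, o in triples)

def text_to_kg(text):
    """Stand-in reverse model: re-extract (subject, relation, object)."""
    triples = []
    for sent in text.split(". "):
        parts = sent.split()
        if len(parts) == 3:
            triples.append(tuple(parts))
    return triples

def cyclic_recall(triples):
    """Fraction of the source KG recovered after the KG->text->KG cycle."""
    recovered = set(text_to_kg(kg_to_text(triples)))
    return len(recovered & set(triples)) / len(triples)

kg = [("Paris", "capital_of", "France"), ("Berlin", "capital_of", "Germany")]
```

When KG and text are equivalent, the cycle is lossless and recall is 1.0; noise in either direction lowers the score, which is what makes the cycle usable as an equivalence proxy.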

* 16 pages 

Document Summarization with Text Segmentation

Jan 20, 2023
Lesly Miculicich, Benjamin Han

In this paper, we exploit the innate document segment structure to improve the extractive summarization task. We build two text segmentation models and find the optimal strategy for introducing their output predictions into an extractive summarization model. Experimental results on a corpus of scientific articles show that extractive summarization benefits from using a highly accurate segmentation method. In particular, most of the improvement is in documents where the most relevant information is not at the beginning; thus, we conclude that segmentation helps reduce the lead-bias problem.
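
One simple way segment structure can counter lead bias is to select the best sentence per segment rather than the leading sentences. The sketch below assumes sentence scores and segment boundaries are given; in the paper both come from trained models, and this is only one possible integration strategy.

```python
# Sketch: segment-aware extractive summarization. Instead of taking the
# first k sentences (lead bias), pick the highest-scoring sentence from
# each predicted segment.

def summarize(sentences, scores, boundaries):
    """Select the best-scoring sentence from each segment.

    boundaries: sorted indices where each segment starts (must include 0).
    """
    summary = []
    segment_starts = list(boundaries) + [len(sentences)]
    for start, end in zip(segment_starts, segment_starts[1:]):
        best = max(range(start, end), key=lambda i: scores[i])
        summary.append(sentences[best])
    return summary

sentences = ["s0", "s1", "s2", "s3"]
scores = [0.1, 0.9, 0.2, 0.8]       # assumed per-sentence relevance scores
summary = summarize(sentences, scores, boundaries=[0, 2])
```

A lead-based baseline would pick `s0` and `s1`; the segment-aware selection instead surfaces `s1` and `s3`, reaching relevant material deep in the document.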

Semi-Supervised Few-Shot Intent Classification and Slot Filling

Sep 17, 2021
Samyadeep Basu, Karine Ip Kiun Chong, Amr Sharaf, Alex Fischer, Vishal Rohra, Michael Amoake, Hazem El-Hammamy, Ehi Nosakhare, Vijay Ramani, Benjamin Han

Intent classification (IC) and slot filling (SF) are two fundamental tasks in modern Natural Language Understanding (NLU) systems. Collecting and annotating large amounts of data to train deep learning models for such systems is not scalable. This problem can be addressed by learning from few examples using fast supervised meta-learning techniques such as prototypical networks. In this work, we systematically investigate how contrastive learning and unsupervised data augmentation methods can benefit these existing supervised meta-learning pipelines for jointly modelled IC/SF tasks. Through extensive experiments across standard IC/SF benchmarks (SNIPS and ATIS), we show that our proposed semi-supervised approaches outperform standard supervised meta-learning methods: contrastive losses in conjunction with prototypical networks consistently outperform the existing state-of-the-art for both IC and SF tasks, while data augmentation strategies primarily improve few-shot IC by a significant margin.
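
The prototypical-network component can be illustrated in isolation: class prototypes are the mean of support-set embeddings, and a query is labeled by its nearest prototype. The vectors below are hypothetical; a real system would embed utterances with a trained encoder, optionally improved with the contrastive losses studied in the paper.

```python
# Minimal sketch of nearest-prototype classification for few-shot intent
# classification.

import math

def prototype(embeddings):
    """Mean of the support embeddings for one class."""
    dim = len(embeddings[0])
    return [sum(e[d] for e in embeddings) / len(embeddings) for d in range(dim)]

def classify(query, support):
    """support: dict mapping intent label -> list of embedding vectors."""
    protos = {label: prototype(embs) for label, embs in support.items()}
    return min(protos, key=lambda label: math.dist(query, protos[label]))

support = {
    "book_flight": [[1.0, 0.0], [0.9, 0.1]],  # assumed support embeddings
    "play_music": [[0.0, 1.0], [0.1, 0.9]],
}
label = classify([0.8, 0.2], support)
```

A contrastive loss would act upstream of this step, pulling same-intent embeddings together so the prototypes separate more cleanly with only a few examples per class.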

Lights, Camera, Action! A Framework to Improve NLP Accuracy over OCR documents

Aug 06, 2021
Amit Gupte, Alexey Romanov, Sahitya Mantravadi, Dalitso Banda, Jianjie Liu, Raza Khan, Lakshmanan Ramu Meenal, Benjamin Han, Soundar Srinivasan

Document digitization is essential for the digital transformation of our societies, yet a crucial step in the process, Optical Character Recognition (OCR), is still not perfect. Even commercial OCR systems can produce questionable output depending on the fidelity of the scanned documents. In this paper, we demonstrate an effective framework for mitigating OCR errors for any downstream NLP task, using Named Entity Recognition (NER) as an example. We first address the data scarcity problem for model training by constructing a document synthesis pipeline, generating realistic but degraded data with NER labels. We measure the NER accuracy drop at various degradation levels and show that a text restoration model, trained on the degraded data, significantly closes the NER accuracy gaps caused by OCR errors, including on an out-of-domain dataset. For the benefit of the community, we have made the document synthesis pipeline available as an open-source project.
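
The degrade-then-restore idea can be sketched at character level: synthesize OCR-like noise over labeled text (labels survive because the degradation is per-character), then apply a restoration step before the downstream task. The substitution table is a hypothetical stand-in for real OCR confusions; the actual pipeline (open-sourced as Genalog) renders and degrades document images.

```python
# Toy character-level version of the degrade/restore idea used to train
# and evaluate a text restoration model for OCR'd documents.

OCR_NOISE = {"l": "1", "O": "0"}          # assumed common OCR confusions
RESTORE = {v: k for k, v in OCR_NOISE.items()}

def degrade(text):
    """Synthesize OCR-like noise while preserving alignment with labels."""
    return "".join(OCR_NOISE.get(c, c) for c in text)

def restore(text):
    """Stand-in restoration model: invert the known confusions."""
    return "".join(RESTORE.get(c, c) for c in text)

noisy = degrade("Olivia called")
clean = restore(noisy)
```

This toy inversion is lossy in general (a genuine digit "1" would be wrongly "restored" to "l"), which is exactly why the paper trains a learned restoration model on the synthesized data instead of using fixed rules.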

* Accepted to the Document Intelligence Workshop at KDD 2021. The source code of Genalog is available at https://github.com/microsoft/genalog 