Hwanhee Lee

IterCQR: Iterative Conversational Query Reformulation without Human Supervision

Nov 16, 2023
Yunah Jang, Kang-il Lee, Hyunkyung Bae, Seungpil Won, Hwanhee Lee, Kyomin Jung

In conversational search, which aims to retrieve passages containing essential information, queries are highly dependent on the preceding dialogue context. Therefore, reformulating conversational queries into standalone forms is essential for the effective use of off-the-shelf retrievers. Previous methodologies for conversational query reformulation frequently depend on human-annotated gold labels. However, these manually crafted queries often yield sub-optimal retrieval performance and are costly to collect. To address these challenges, we propose Iterative Conversational Query Reformulation (IterCQR), a methodology that performs query reformulation without relying on human oracles. IterCQR iteratively trains the QR model by directly leveraging the information retrieval (IR) signal as a reward. Our proposed IterCQR method achieves state-of-the-art performance on two datasets, demonstrating its effectiveness with both sparse and dense retrievers. Notably, IterCQR also remains robust in domain-shift, low-resource, and topic-shift scenarios.
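
For illustration, below is a minimal sketch of the kind of iterative, IR-reward-driven training loop the abstract describes. The helper functions (`generate_rewrites`, `retrieval_score`, `finetune`) are hypothetical placeholders, not the authors' released code.

```python
# Sketch of an iterative query-reformulation loop driven by retrieval reward.
# generate_rewrites, retrieval_score, and finetune are hypothetical stand-ins.

def itercqr_iteration(qr_model, dialogues, retriever, num_samples=8):
    """One iteration: sample rewrites, score them with the retriever, and keep
    the best-scoring rewrite per dialogue as the next training target."""
    new_targets = []
    for dialogue in dialogues:
        candidates = generate_rewrites(qr_model, dialogue, num_samples=num_samples)
        # Retrieval quality (e.g., rank of the gold passage) replaces a
        # human-written rewrite as the supervision signal.
        scored = [(retrieval_score(retriever, cand, dialogue), cand)
                  for cand in candidates]
        _, best_rewrite = max(scored)
        new_targets.append((dialogue, best_rewrite))
    return finetune(qr_model, new_targets)


def train_itercqr(qr_model, dialogues, retriever, iterations=3):
    for _ in range(iterations):
        qr_model = itercqr_iteration(qr_model, dialogues, retriever)
    return qr_model
```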

LifeTox: Unveiling Implicit Toxicity in Life Advice

Nov 16, 2023
Minbeom Kim, Jahyun Koo, Hwanhee Lee, Joonsuk Park, Hwaran Lee, Kyomin Jung

As large language models become increasingly integrated into daily life, detecting implicit toxicity across diverse contexts is crucial. To this end, we introduce LifeTox, a dataset designed for identifying implicit toxicity within a broad range of advice-seeking scenarios. Unlike existing safety datasets, LifeTox comprises diverse contexts derived from personal experiences through open-ended questions. Experiments demonstrate that RoBERTa fine-tuned on LifeTox matches or surpasses the zero-shot performance of large language models in toxicity classification tasks. These results underscore the efficacy of LifeTox in addressing the complex challenges inherent in implicit toxicity.

* 8 pages, 3 figures 
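
As an illustration of the kind of classifier experiment the abstract mentions, here is a RoBERTa fine-tuning sketch using Hugging Face `transformers`; the CSV paths, column names, and hyperparameters are assumptions rather than the released LifeTox setup.

```python
# Illustrative binary toxicity-classifier fine-tuning (not the authors' code).
# The data files and column names ("text", "label") are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

raw = load_dataset("csv", data_files={"train": "lifetox_train.csv",
                                      "validation": "lifetox_val.csv"})

def tokenize(batch):
    # "text" holds the advice; "label" is 0 (safe) or 1 (implicitly toxic).
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="toxicity-clf",
                           per_device_train_batch_size=16,
                           num_train_epochs=3,
                           evaluation_strategy="epoch"),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,
)
trainer.train()
```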

Dialogizer: Context-aware Conversational-QA Dataset Generation from Textual Sources

Nov 09, 2023
Yerin Hwang, Yongil Kim, Hyunkyung Bae, Jeesoo Bang, Hwanhee Lee, Kyomin Jung

To address the data scarcity issue in conversational question answering (ConvQA), a dialog inpainting method, which utilizes documents to generate ConvQA datasets, has been proposed. However, the original dialog inpainting model is trained solely on the dialog reconstruction task, resulting in questions with low contextual relevance due to insufficient learning of question-answer alignment. To overcome this limitation, we propose a novel framework called Dialogizer, which can automatically generate ConvQA datasets with high contextual relevance from textual sources. The framework incorporates two training tasks: question-answer matching (QAM) and topic-aware dialog generation (TDG). Moreover, re-ranking is conducted during the inference phase based on the contextual relevance of the generated questions. Using our framework, we produce four ConvQA datasets from documents in multiple domains. Through automatic evaluation with diverse metrics, as well as human evaluation, we validate that our framework generates datasets of higher quality than the baseline dialog inpainting model.

* Accepted to EMNLP 2023 main conference 
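
To make the inference-time re-ranking step concrete, here is a small sketch that orders candidate questions by a contextual-relevance score; `relevance_score` stands in for a trained QAM-style scorer and is a hypothetical placeholder, not the paper's implementation.

```python
# Sketch of re-ranking generated questions by contextual relevance.
from typing import Callable, List

def rerank_questions(context: str,
                     answer_span: str,
                     candidates: List[str],
                     relevance_score: Callable[[str, str, str], float]) -> List[str]:
    """Order candidate questions by how well they fit the dialog context
    and the answer span they are meant to elicit."""
    return sorted(candidates,
                  key=lambda q: relevance_score(context, answer_span, q),
                  reverse=True)

# Toy usage with a trivial stand-in scorer (word overlap with the context).
def toy_scorer(context, answer, question):
    return len(set(context.lower().split()) & set(question.lower().split()))

ranked = rerank_questions("The museum opened in 1902 and has moved twice since.",
                          "1902",
                          ["Who designed the building?", "When did the museum open?"],
                          toy_scorer)
print(ranked[0])  # -> "When did the museum open?"
```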

Asking Clarification Questions to Handle Ambiguity in Open-Domain QA

May 23, 2023
Dongryeol Lee, Segwang Kim, Minwoo Lee, Hwanhee Lee, Joonsuk Park, Sang-Woo Lee, Kyomin Jung

Ambiguous questions persist in open-domain question answering because formulating a precise question with a unique answer is often challenging. Previously, Min et al. (2020) tackled this issue by generating disambiguated questions for all possible interpretations of the ambiguous question. This can be effective, but it is not ideal for providing an answer to the user. Instead, we propose to ask a clarification question, where the user's response helps identify the interpretation that best aligns with the user's intention. We first present CAMBIGNQ, a dataset consisting of 5,654 ambiguous questions, each with relevant passages, possible answers, and a clarification question. The clarification questions were efficiently created by generating them with InstructGPT and manually revising them as necessary. We then define a pipeline of tasks and design appropriate evaluation metrics. Lastly, we achieve 61.3 F1 on ambiguity detection and 40.5 F1 on clarification-based QA, providing strong baselines for future work.

* 15 pages, 4 figures 
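
The pipeline the abstract defines can be summarized as three stages; the sketch below shows the control flow with hypothetical placeholder models (the `...` bodies stand in for the trained components, which are not reproduced here).

```python
# Pipeline sketch: ambiguity detection -> clarification question -> QA.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class PipelineOutput:
    question: str                 # original (possibly ambiguous) question
    clarification: Optional[str]  # clarification question asked back, if any
    answer: str

def is_ambiguous(question: str, passages: List[str]) -> bool:
    ...  # classifier over the question and retrieved passages

def generate_clarification(question: str, passages: List[str]) -> str:
    ...  # generator producing a clarification question with answer options

def answer(question: str, user_reply: Optional[str], passages: List[str]) -> str:
    ...  # reader conditioned on the (clarified) question

def run_pipeline(question: str, passages: List[str],
                 ask_user: Callable[[str], str]) -> PipelineOutput:
    if is_ambiguous(question, passages):
        clarification = generate_clarification(question, passages)
        reply = ask_user(clarification)  # the user picks an interpretation
        return PipelineOutput(question, clarification, answer(question, reply, passages))
    return PipelineOutput(question, None, answer(question, None, passages))
```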

Critic-Guided Decoding for Controlled Text Generation

Dec 21, 2022
Minbeom Kim, Hwanhee Lee, Kang Min Yoo, Joonsuk Park, Hwaran Lee, Kyomin Jung

Steering language generation towards objectives or away from undesired content has been a long-standing goal of work on language models (LMs). Recent work has shown that reinforcement learning and weighted decoding are effective approaches to achieving greater control and quality in language generation, each with its own pros and cons. In this work, we propose a novel critic decoding method for controlled language generation (CriticControl) that combines the strengths of reinforcement learning and weighted decoding. Specifically, we adopt the actor-critic framework to train an LM-steering critic from non-differentiable reward models. Similar to weighted decoding, our method freezes the language model and manipulates the output token distribution using the trained critic, improving training efficiency and stability. Evaluation of our method on three controlled generation tasks, namely topic control, sentiment control, and detoxification, shows that our approach generates more coherent and well-controlled texts than previous methods. In addition, CriticControl demonstrates superior generalization ability in zero-shot settings. Human evaluation studies also corroborate our findings.

* 11 pages, 6 figures 
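
A toy sketch of the decoding mechanism described above: the frozen LM's next-token distribution is re-weighted by a critic's per-token value estimates and renormalized. The dictionaries below are illustrative stand-ins for the frozen LM and the trained critic, not the paper's models.

```python
# Toy critic-weighted decoding step: p'(t) is proportional to p_LM(t) * exp(alpha * V_critic(t)).
import math

def critic_weighted_step(lm_probs: dict, critic_values: dict, alpha: float = 1.0) -> dict:
    """Re-weight the frozen LM's next-token probabilities with critic values."""
    weighted = {tok: p * math.exp(alpha * critic_values.get(tok, 0.0))
                for tok, p in lm_probs.items()}
    total = sum(weighted.values())
    return {tok: w / total for tok, w in weighted.items()}

# Example: a critic trained for topic control (topic = sports) shifts mass
# toward on-topic continuations without updating the language model.
lm_probs = {"soccer": 0.30, "politics": 0.50, "weather": 0.20}
critic_values = {"soccer": 1.5, "politics": -1.0, "weather": 0.0}
print(critic_weighted_step(lm_probs, critic_values))
```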

Attribution-based Task-specific Pruning for Multi-task Language Models

May 09, 2022
Nakyeong Yang, Yunah Jang, Hwanhee Lee, Seohyeong Jung, Kyomin Jung

Multi-task language models show outstanding performance on various natural language understanding tasks with only a single model. However, these language models inevitably carry an unnecessarily large number of parameters, even when they are used for only a specific task. In this paper, we propose a novel training-free task-specific pruning method for multi-task language models. Specifically, we use an attribution method to compute the importance of each neuron for performing a specific task. We then prune the neurons that are unimportant for that task based on the computed importance. Experimental results on six widely used datasets show that our proposed pruning method significantly outperforms baseline compression methods. We also extend our method to low-resource settings, where the amount of labeled data is insufficient.

* 5 pages, 5 figures 
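
Below is a deliberately simplified, self-contained sketch of the pruning idea: score each hidden unit by an attribution-style importance on task data, then mask the lowest-scoring units. The toy layer and the importance heuristic (mean |activation| times outgoing weight mass) are assumptions made for illustration, not the paper's attribution method.

```python
# Toy neuron-pruning sketch with an attribution-style importance score.
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.normal(size=(16, 64))      # input -> hidden weights
W_out = rng.normal(size=(64, 4))      # hidden -> task-logit weights
task_inputs = rng.normal(size=(128, 16))

hidden = np.maximum(task_inputs @ W_in, 0.0)        # ReLU activations on task data
importance = (np.abs(hidden).mean(axis=0)           # per-neuron activation magnitude
              * np.abs(W_out).sum(axis=1))          # times outgoing weight mass

prune_ratio = 0.5
threshold = np.quantile(importance, prune_ratio)
keep_mask = importance > threshold                  # keep the task-important neurons

pruned_logits = (hidden * keep_mask) @ W_out        # forward pass with pruned neurons
print(f"kept {int(keep_mask.sum())} of {keep_mask.size} neurons")
```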

Masked Summarization to Generate Factually Inconsistent Summaries for Improved Factual Consistency Checking

May 04, 2022
Hwanhee Lee, Kang Min Yoo, Joonsuk Park, Hwaran Lee, Kyomin Jung

Despite the recent advances in abstractive summarization systems, it is still difficult to determine whether a generated summary is factually consistent with the source text. To this end, the latest approach is to train a factual consistency classifier on factually consistent and inconsistent summaries. Luckily, the former is readily available as reference summaries in existing summarization datasets. However, generating the latter remains a challenge, as such summaries need to be factually inconsistent, yet closely relevant to the source text, to be effective. In this paper, we propose to generate factually inconsistent summaries using source texts and reference summaries with key information masked. Experiments on seven benchmark datasets demonstrate that factual consistency classifiers trained on summaries generated with our method generally outperform existing models and show a competitive correlation with human judgments. We also analyze the characteristics of the summaries generated using our method. We will release the pre-trained model and the code at https://github.com/hwanheelee1993/MFMA.

* NAACL 2022 Findings 
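
A rough sketch of the negative-sample generation idea follows: key entities are masked in the source and reference, and a generator refills the reference's masked spans, producing summaries that stay close to the source yet drift factually. spaCy is an assumed tooling choice here, and `fill_masks` is a hypothetical seq2seq wrapper, not the released pipeline.

```python
# Sketch of masked-summary negative generation (illustrative only).
import spacy

nlp = spacy.load("en_core_web_sm")
MASK = "<mask>"

def mask_key_entities(text: str) -> str:
    """Replace named entities (people, dates, numbers, ...) with mask tokens."""
    doc = nlp(text)
    masked = text
    for ent in sorted(doc.ents, key=lambda e: e.start_char, reverse=True):
        masked = masked[:ent.start_char] + MASK + masked[ent.end_char:]
    return masked

def make_negative_summary(source: str, reference: str, fill_masks) -> str:
    """Refill the masked reference conditioned on a partially masked source,
    yielding a fluent but likely factually inconsistent summary."""
    return fill_masks(context=mask_key_entities(source),
                      masked_summary=mask_key_entities(reference))
```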

Factual Error Correction for Abstractive Summaries Using Entity Retrieval

Apr 18, 2022
Hwanhee Lee, Cheoneum Park, Seunghyun Yoon, Trung Bui, Franck Dernoncourt, Juae Kim, Kyomin Jung

Despite the recent advancements in abstractive summarization systems leveraging large-scale datasets and pre-trained language models, the factual correctness of their summaries is still insufficient. One line of work to mitigate this problem is to add a post-editing process that can detect and correct factual errors in the summary. Such a post-editing system should 1) have a high success rate and be interpretable, and 2) run quickly. Previous approaches focus on regenerating the summary with autoregressive models, which lack interpretability and require substantial computing resources. In this paper, we propose RFEC, an efficient factual error correction system based on an entity-retrieval post-editing process. RFEC first retrieves evidence sentences from the original document by comparing the sentences with the target summary, which greatly reduces the length of text the system has to analyze. Next, RFEC detects entity-level errors in the summary by considering the evidence sentences and substitutes the wrong entities with accurate entities from the evidence sentences. Experimental results show that our proposed error correction system corrects factual errors more effectively than baseline methods while running much faster.

* 6 pages, 3 figures 
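
The two stages described above can be sketched as follows; the word-overlap retriever is a simple stand-in for the paper's evidence retrieval, and `correct_entities` is a hypothetical model wrapper that swaps inconsistent entities for ones found in the evidence.

```python
# Sketch of an entity-retrieval post-editing flow (illustrative simplification).
from typing import Callable, List

def retrieve_evidence(document_sentences: List[str], summary: str, k: int = 3) -> List[str]:
    """Pick the k document sentences that share the most words with the summary,
    shrinking the text the correction model has to read."""
    summary_words = set(summary.lower().split())
    return sorted(document_sentences,
                  key=lambda s: len(summary_words & set(s.lower().split())),
                  reverse=True)[:k]

def post_edit(summary: str,
              document_sentences: List[str],
              correct_entities: Callable[[str, List[str]], str]) -> str:
    evidence = retrieve_evidence(document_sentences, summary)
    # The correction model compares entities in the summary against entities
    # in the evidence and substitutes the inconsistent ones.
    return correct_entities(summary, evidence)
```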

CrossAug: A Contrastive Data Augmentation Method for Debiasing Fact Verification Models

Sep 30, 2021
Minwoo Lee, Seungpil Won, Juae Kim, Hwanhee Lee, Cheoneum Park, Kyomin Jung

Fact verification datasets are typically constructed using crowdsourcing techniques due to the lack of text sources with veracity labels. However, the crowdsourcing process often produces undesired biases in the data that cause models to learn spurious patterns. In this paper, we propose CrossAug, a contrastive data augmentation method for debiasing fact verification models. Specifically, we employ a two-stage augmentation pipeline to generate new claims and evidence from existing samples. The generated samples are then paired cross-wise with the original pair, forming contrastive samples that encourage the model to rely less on spurious patterns and learn more robust representations. Experimental results show that our method outperforms the previous state-of-the-art debiasing technique by 3.6% on the debiased extension of the FEVER dataset, with a total performance boost of 10.13% over the baseline. Furthermore, we evaluate our approach in data-scarce settings, where models can be more susceptible to biases due to the lack of training data. Experimental results demonstrate that our approach is also effective at debiasing in these low-resource conditions, exceeding the baseline performance on the Symmetric dataset with just 1% of the original data.

* 5 pages, accepted as a short paper at CIKM 2021 
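
A schematic of the cross-wise pairing described above: given an original (claim, evidence) pair, a generated negative claim, and evidence modified to match it, the four cross combinations become contrastive training samples. The two generation functions are hypothetical placeholders for the paper's two-stage pipeline, and the label scheme follows the usual FEVER convention.

```python
# Sketch of contrastive cross-pairing for fact-verification augmentation.
def crossaug_samples(claim, evidence, negate_claim, modify_evidence):
    neg_claim = negate_claim(claim)                      # stage 1: negative claim
    neg_evidence = modify_evidence(evidence, neg_claim)  # stage 2: aligned evidence
    # Each claim is paired with both pieces of evidence, so a model must read
    # the evidence rather than memorize claim-only patterns.
    return [
        (claim,     evidence,     "SUPPORTS"),
        (claim,     neg_evidence, "REFUTES"),
        (neg_claim, evidence,     "REFUTES"),
        (neg_claim, neg_evidence, "SUPPORTS"),
    ]
```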

QACE: Asking Questions to Evaluate an Image Caption

Aug 28, 2021
Hwanhee Lee, Thomas Scialom, Seunghyun Yoon, Franck Dernoncourt, Kyomin Jung

In this paper, we propose QACE, a new metric based on Question Answering for Caption Evaluation. QACE generates questions about the evaluated caption and checks its content by asking those questions of either the reference caption or the source image. We first develop QACE-Ref, which compares the answers obtained from the evaluated caption with those from its reference, and report results competitive with state-of-the-art metrics. Going further, we propose QACE-Img, which asks the questions directly of the image instead of the reference. QACE-Img requires a Visual-QA system; unfortunately, standard VQA models are framed as classification over only a few thousand answer categories. Instead, we propose Visual-T5, an abstractive VQA system. The resulting metric, QACE-Img, is multi-modal, reference-less, and explainable. Our experiments show that QACE-Img compares favorably with other reference-less metrics. We will release the pre-trained models to compute QACE.

* EMNLP 2021 Findings 
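
For illustration, a sketch of a QACE-Ref-style scoring loop: questions are generated from the candidate caption, answered against both the candidate and the reference, and answer agreement becomes the score. `generate_questions` and `answer_question` are hypothetical QG/QA wrappers, not the released QACE models; QACE-Img would instead answer the questions on the image with a visual QA model.

```python
# Sketch of QA-based caption evaluation (reference-based variant).
from typing import Callable, List

def qace_ref_score(candidate: str,
                   reference: str,
                   generate_questions: Callable[[str], List[str]],
                   answer_question: Callable[[str, str], str]) -> float:
    """Fraction of questions about the candidate caption whose answers agree
    when asked against the candidate itself and against the reference."""
    questions = generate_questions(candidate)
    if not questions:
        return 0.0
    agreements = sum(
        answer_question(q, candidate).strip().lower()
        == answer_question(q, reference).strip().lower()
        for q in questions
    )
    return agreements / len(questions)
```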