Ruifeng Yuan

RefGPT: Reference -> Truthful & Customized Dialogues Generation by GPTs and for GPTs

May 25, 2023
Dongjie Yang, Ruifeng Yuan, YuanTao Fan, YiFei Yang, Zili Wang, Shusen Wang, Hai Zhao

General chat models, like ChatGPT, have attained impressive capability to resolve a wide range of NLP tasks by tuning Large Language Models (LLMs) with high-quality instruction data. However, collecting human-written high-quality data, especially multi-turn dialogues, is expensive and unattainable for most people. Although previous studies have used powerful LLMs to generate dialogues automatically, they all suffer from generating untruthful dialogues because of LLM hallucination. Therefore, we propose a method called RefGPT to generate large numbers of truthful and customized dialogues without worrying about factual errors caused by model hallucination. RefGPT solves model hallucination in dialogue generation by restricting the LLMs to leverage the given reference instead of reciting their own knowledge to generate dialogues. Additionally, RefGPT adds detailed controls on every utterance to enable a high degree of customization, which previous studies have ignored. On the basis of RefGPT, we also propose two high-quality dialogue datasets generated by GPT-4, namely RefGPT-Fact and RefGPT-Code. RefGPT-Fact is a 100k multi-turn dialogue dataset based on factual knowledge, and RefGPT-Code is a 76k multi-turn dialogue dataset covering a wide range of coding scenarios. Our code and datasets are released at https://github.com/ziliwangnlp/RefGPT
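The reference-grounding idea can be prototyped with a plain prompt template: show the model the reference, forbid facts outside it, and spell out per-utterance controls. The Python sketch below is only an illustration under those assumptions; the prompt wording, the control fields, and the generate callable are invented here and are not the authors' actual templates.

from typing import Callable, Dict, List

def build_refgpt_prompt(reference: str, controls: List[Dict[str, str]]) -> str:
    """Compose a prompt that restricts generation to the reference and
    states the requirements for each dialogue turn (illustrative template)."""
    turns = [
        f"Turn {i + 1} ({c.get('role', 'user')}): "
        f"style={c.get('style', 'neutral')}, length={c.get('length', 'medium')}"
        for i, c in enumerate(controls)
    ]
    return (
        "Generate a multi-turn dialogue strictly grounded in the reference below.\n"
        "Do not state any fact that is absent from the reference.\n\n"
        f"Reference:\n{reference}\n\n"
        "Per-utterance requirements:\n" + "\n".join(turns)
    )

def generate_dialogue(reference: str, controls: List[Dict[str, str]],
                      generate: Callable[[str], str]) -> str:
    # `generate` is any LLM completion function, e.g. a thin wrapper around GPT-4.
    return generate(build_refgpt_prompt(reference, controls))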


Improving Sentence Similarity Estimation for Unsupervised Extractive Summarization

Feb 24, 2023
Shichao Sun, Ruifeng Yuan, Wenjie Li, Sujian Li

Unsupervised extractive summarization aims to extract salient sentences from a document as the summary without labeled data. Recent literature mostly studies how to leverage sentence similarity to rank sentences in order of salience. However, sentence similarity estimation using pre-trained language models mostly takes little account of document-level information and has a weak correlation with sentence salience ranking. In this paper, we propose two novel strategies to improve sentence similarity estimation for unsupervised extractive summarization. We use contrastive learning to optimize a document-level objective whereby sentences from the same document are more similar than those from different documents. Moreover, we use mutual learning to enhance the relationship between sentence similarity estimation and sentence salience ranking, where an extra signal amplifier is used to refine the pivotal information. Experimental results demonstrate the effectiveness of our strategies.
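The document-level contrastive objective can be written down compactly: within a batch, sentence embeddings from the same document are treated as positives and embeddings from other documents as negatives. The PyTorch sketch below is an assumed multi-positive InfoNCE-style rendering of that idea, not the paper's exact loss.

import torch
import torch.nn.functional as F

def doc_contrastive_loss(embeddings: torch.Tensor,
                         doc_ids: torch.Tensor,
                         temperature: float = 0.1) -> torch.Tensor:
    """embeddings: (N, d) sentence embeddings; doc_ids: (N,) document index of each sentence."""
    z = F.normalize(embeddings, dim=-1)
    sim = z @ z.t() / temperature                        # pairwise cosine similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    positives = (doc_ids.unsqueeze(0) == doc_ids.unsqueeze(1)) & ~self_mask

    sim = sim.masked_fill(self_mask, float('-inf'))      # never contrast a sentence with itself
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Average the log-probability over each sentence's same-document positives.
    pos_log_prob = log_prob.masked_fill(~positives, 0.0).sum(1) / positives.sum(1).clamp(min=1)
    return -pos_log_prob.mean()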

* Accepted by ICASSP 2023 

Few-shot Query-Focused Summarization with Prefix-Merging

Nov 29, 2022
Ruifeng Yuan, Zili Wang, Ziqiang Cao, Wenjie Li

Query-focused summarization has been considered an important extension of text summarization. It aims to generate a concise highlight for a given query. Unlike generic text summarization, query-focused summarization has long been plagued by the lack of high-quality large-scale datasets. In this paper, we investigate whether we can integrate and transfer the knowledge of text summarization and question answering to assist few-shot learning in query-focused summarization. Here, we propose prefix-merging, a prefix-based pretraining strategy for few-shot learning in query-focused summarization. Drawing inspiration from prefix-tuning, we integrate the task knowledge from text summarization and question answering into a properly designed prefix and apply the merged prefix to query-focused summarization. With only a small number of trainable parameters, prefix-merging outperforms fine-tuning on query-focused summarization. We further discuss the influence of different prefix designs and provide a visualized explanation of how prefix-merging works.
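A rough way to picture prefix-merging is as two sets of prefix parameters, one carrying summarization knowledge and one carrying question-answering knowledge, combined into a single prefix that conditions a frozen backbone on the few-shot target task. The sketch below assumes PyTorch and a simple concatenation scheme; the shapes and the merging rule are illustrative guesses, not the paper's design.

import torch
import torch.nn as nn

class MergedPrefix(nn.Module):
    """Two task prefixes (assumed pretrained on summarization and QA) merged
    by concatenation into one prefix for query-focused summarization."""
    def __init__(self, prefix_len: int, hidden_size: int):
        super().__init__()
        self.summ_prefix = nn.Parameter(torch.randn(prefix_len, hidden_size))
        self.qa_prefix = nn.Parameter(torch.randn(prefix_len, hidden_size))

    def forward(self, batch_size: int) -> torch.Tensor:
        # Concatenate along the prefix-length axis and broadcast over the batch;
        # the backbone LM stays frozen, so only these parameters are trained.
        merged = torch.cat([self.summ_prefix, self.qa_prefix], dim=0)
        return merged.unsqueeze(0).expand(batch_size, -1, -1)

# Example: a (4, 20, 768) prefix to prepend to the model's hidden states.
prefix = MergedPrefix(prefix_len=10, hidden_size=768)(batch_size=4)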

* Accepted by EMNLP 2022 

Fact-level Extractive Summarization with Hierarchical Graph Mask on BERT

Nov 19, 2020
Ruifeng Yuan, Zili Wang, Wenjie Li

Most current extractive summarization models generate summaries by selecting salient sentences. However, one problem with sentence-level extractive summarization is that there exists a gap between the human-written gold summary and the oracle sentence labels. In this paper, we propose to extract fact-level semantic units for better extractive summarization. We also introduce a hierarchical structure that incorporates multiple levels of granularity of the textual information into the model. In addition, we combine our model with BERT using a hierarchical graph mask. This allows us to exploit BERT's ability in natural language understanding together with the structural information, without increasing the scale of the model. Experiments on the CNN/DailyMail dataset show that our model achieves state-of-the-art results.
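The hierarchical graph mask can be thought of as a structured attention mask over a flattened sequence of token, fact, and sentence nodes: tokens see their own fact, fact nodes see their tokens and their sentence node, and sentence nodes see their facts. The sketch below is an assumed rendering of that connectivity in PyTorch; the exact node layout and edge set in the paper may differ.

import torch

TOKEN, FACT, SENT = 0, 1, 2

def hierarchical_graph_mask(node_type, fact_id, sent_id):
    """All arguments are equal-length sequences describing each node in the
    flattened input. Returns an (N, N) boolean mask; True = attention allowed."""
    t = torch.tensor(node_type)
    f = torch.tensor(fact_id)
    s = torch.tensor(sent_id)
    same_fact = f.unsqueeze(0) == f.unsqueeze(1)
    same_sent = s.unsqueeze(0) == s.unsqueeze(1)
    is_tok, is_fact, is_sent = (t == TOKEN), (t == FACT), (t == SENT)

    mask = torch.zeros(len(t), len(t), dtype=torch.bool)
    # Tokens attend to tokens and the fact node of their own fact.
    mask |= same_fact & is_tok.unsqueeze(1) & (is_tok | is_fact).unsqueeze(0)
    # Fact nodes attend to their tokens and to their parent sentence node.
    mask |= same_fact & is_fact.unsqueeze(1) & is_tok.unsqueeze(0)
    mask |= same_sent & is_fact.unsqueeze(1) & is_sent.unsqueeze(0)
    # Sentence nodes attend to the fact nodes of their sentence.
    mask |= same_sent & is_sent.unsqueeze(1) & is_fact.unsqueeze(0)
    return mask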

* Accepted by COLING 2020 