RefGPT: Reference -> Truthful & Customized Dialogues Generation by GPTs and for GPTs

May 25, 2023
Dongjie Yang, Ruifeng Yuan, YuanTao Fan, YiFei Yang, Zili Wang, Shusen Wang, Hai Zhao

General chat models, like ChatGPT, have attained an impressive capability to resolve a wide range of NLP tasks by tuning Large Language Models (LLMs) with high-quality instruction data. However, collecting human-written high-quality data, especially multi-turn dialogues, is expensive and unattainable for most people. Though previous studies have used powerful LLMs to generate dialogues automatically, they all suffer from generating untruthful dialogues because of LLM hallucination. Therefore, we propose a method called RefGPT to generate enormous truthful and customized dialogues without worrying about factual errors caused by model hallucination. RefGPT solves model hallucination in dialogue generation by restricting the LLMs to leverage the given reference instead of reciting their own knowledge to generate dialogues. Additionally, RefGPT adds detailed controls on every utterance to enable a high degree of customization, which previous studies have ignored. On the basis of RefGPT, we also propose two high-quality dialogue datasets generated by GPT-4, namely RefGPT-Fact and RefGPT-Code. RefGPT-Fact is a dataset of 100k multi-turn dialogues based on factual knowledge, and RefGPT-Code is a dataset of 76k multi-turn dialogues covering a wide range of coding scenarios. Our code and datasets are released at https://github.com/ziliwangnlp/RefGPT
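To make the idea concrete, below is a minimal sketch of reference-grounded dialogue generation in the spirit of the abstract: the model is instructed to draw facts only from a supplied reference passage rather than from its own knowledge. The prompt wording, the build_prompt helper, and the sample reference are illustrative assumptions, not the authors' actual templates or data pipeline (those are in the linked repository); the sketch assumes the OpenAI Python SDK (openai>=1.0) with an API key in the environment.

    # Sketch: ask GPT-4 to generate a multi-turn dialogue grounded in a given reference.
    # Hypothetical prompt template; not the official RefGPT implementation.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def build_prompt(reference: str, num_turns: int = 4) -> str:
        # Restrict the model to the reference text, the core idea used to
        # suppress hallucinated facts in the generated dialogue.
        return (
            f"Generate a {num_turns}-turn dialogue between a user and an assistant.\n"
            "Every factual statement must come from the reference below; "
            "do not add facts from your own knowledge.\n\n"
            f"Reference:\n{reference}"
        )

    reference_text = (
        "Mount Everest is 8,849 metres tall and lies on the border "
        "between Nepal and the Tibet Autonomous Region of China."
    )

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": build_prompt(reference_text)}],
    )
    print(response.choices[0].message.content)

Per-utterance controls (e.g. desired length, style, or question type for each turn) could be appended to the same prompt, which is how the abstract describes customizing individual utterances.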
