Xiaonan Li

LLatrieval: LLM-Verified Retrieval for Verifiable Generation

Nov 14, 2023
Xiaonan Li, Changtai Zhu, Linyang Li, Zhangyue Yin, Tianxiang Sun, Xipeng Qiu

Verifiable generation aims to let the large language model (LLM) generate text with corresponding supporting documents, which enables the user to flexibly verify the answer and makes it more trustworthy. Its evaluation measures not only the correctness of the answer but also the answer's verifiability, i.e., how well the answer is supported by the corresponding documents. Typically, verifiable generation adopts the retrieve-then-read pipeline, which is divided into two stages: 1) retrieve documents relevant to the question; 2) generate the corresponding answer based on the retrieved documents. Since the retrieved documents can supplement knowledge for the LLM to generate the answer and serve as evidence, the retrieval stage is essential for both the correctness and the verifiability of the answer. However, widely used retrievers become the bottleneck of the entire pipeline and limit its overall performance. They often have far fewer parameters than the large language model and have not been shown to scale well to the size of LLMs. Since the LLM passively receives the retrieval result, if the retriever does not correctly find the supporting documents, the LLM cannot generate a correct and verifiable answer, which overshadows the LLM's remarkable abilities. In this paper, we propose LLatrieval (Large Language Model Verified Retrieval), where the LLM updates the retrieval result until it verifies that the retrieved documents can support answering the question. Thus, the LLM can iteratively provide feedback to retrieval and refine the retrieval result so that it sufficiently supports verifiable generation. Experimental results show that our method significantly outperforms extensive baselines and achieves new state-of-the-art results.
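
A minimal sketch of the verify-and-update loop described above, assuming hypothetical `retrieve`, `llm_verify`, and `llm_update_query` helpers; it illustrates the iteration pattern rather than the paper's exact prompts or verification criteria.

```python
def llm_verified_retrieval(question, retrieve, llm_verify, llm_update_query,
                           max_rounds=3, top_k=5):
    """Iteratively refine retrieval until the LLM verifies that the documents
    can support answering the question (hypothetical helper signatures)."""
    query = question
    docs = retrieve(query, top_k)              # initial retrieval
    for _ in range(max_rounds):
        verdict = llm_verify(question, docs)   # LLM judges supportiveness
        if verdict["supported"]:
            break
        # LLM feedback (e.g., missing facts) drives the next retrieval round
        query = llm_update_query(question, docs, verdict["feedback"])
        docs = retrieve(query, top_k)
    return docs
```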


Unified Demonstration Retriever for In-Context Learning

May 16, 2023
Xiaonan Li, Kai Lv, Hang Yan, Tianyang Lin, Wei Zhu, Yuan Ni, Guotong Xie, Xiaoling Wang, Xipeng Qiu

In-context learning is a new learning paradigm where a language model conditions on a few input-output pairs (demonstrations) and a test input, and directly outputs the prediction. It has been shown to be highly dependent on the provided demonstrations, which has promoted research on demonstration retrieval: given a test input, relevant examples are retrieved from the training set to serve as informative demonstrations for in-context learning. While previous works focus on training task-specific retrievers for several tasks separately, these methods are hard to transfer and scale across various tasks, and separately trained retrievers incur substantial parameter storage and deployment costs. In this paper, we propose the Unified Demonstration Retriever (UDR), a single model to retrieve demonstrations for a wide range of tasks. To train UDR, we cast various tasks' training signals into a unified list-wise ranking formulation using the language model's feedback. We then propose a multi-task list-wise ranking training framework, with an iterative mining strategy to find high-quality candidates, which helps UDR fully incorporate the various tasks' signals. Experiments on 30+ tasks across 13 task families and multiple data domains show that UDR significantly outperforms baselines. Further analyses show the effectiveness of each proposed component and UDR's strong ability in various scenarios, including different LMs (1.3B - 175B), unseen datasets, varying numbers of demonstrations, etc.
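
One way to read the list-wise ranking formulation above: score each candidate demonstration by the language model's feedback (e.g., likelihood of the gold output), then push the retriever's similarity distribution toward that score distribution. A hedged PyTorch sketch, with the shapes and temperature as illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def listwise_ranking_loss(query_emb, cand_embs, lm_scores, temperature=0.1):
    """List-wise ranking: the retriever's similarity distribution over
    candidates is pushed toward the LM-feedback score distribution.
    query_emb: (d,), cand_embs: (k, d), lm_scores: (k,) LM log-likelihoods."""
    sims = cand_embs @ query_emb / temperature           # retriever scores
    target = F.softmax(lm_scores / temperature, dim=0)   # soft ranking target
    return -(target * F.log_softmax(sims, dim=0)).sum()  # soft cross-entropy
```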

* ACL 2023 camera ready version 

MoT: Pre-thinking and Recalling Enable ChatGPT to Self-Improve with Memory-of-Thoughts

May 09, 2023
Xiaonan Li, Xipeng Qiu

Large Language Models have shown impressive abilities on various tasks. However, fundamentally improving them depends on high-quality datasets or computationally expensive fine-tuning. In contrast, humans can easily improve themselves through thinking and memory, without external resources. In this paper, we propose a framework, MoT, to let the LLM self-improve through a Memory of Thoughts, without annotated datasets or parameter updates. Specifically, the framework is divided into two stages: 1) before inference, we let the LLM pre-think on an unlabeled dataset and save the high-confidence thoughts as external memory; 2) during inference, given a test question, we let the LLM recall relevant memory to help itself reason and answer. Experimental results show that the proposed framework can help ChatGPT significantly improve its abilities in math reasoning, commonsense reasoning, factual reasoning, and natural language inference. Further analyses show that each component contributes critically to the improvements.
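
A rough sketch of the two stages, with `llm_cot`, `confidence`, `embed`, and `llm_generate` as hypothetical stand-ins for the paper's prompting, confidence estimation, and retrieval components:

```python
def pre_think(unlabeled_questions, llm_cot, confidence, threshold=0.8):
    """Stage 1: generate thoughts on unlabeled data, keep high-confidence ones."""
    memory = []
    for q in unlabeled_questions:
        thought, answer = llm_cot(q)              # e.g., chain-of-thought sampling
        if confidence(q, thought, answer) >= threshold:
            memory.append({"question": q, "thought": thought, "answer": answer})
    return memory

def recall_and_answer(test_question, memory, embed, llm_generate, top_k=4):
    """Stage 2: retrieve relevant memory entries and prepend them as demonstrations."""
    q_emb = embed(test_question)                  # assume embed returns a vector
    ranked = sorted(memory, key=lambda m: -(q_emb @ embed(m["question"])))
    demos = ranked[:top_k]
    prompt = "\n\n".join(f"Q: {m['question']}\nA: {m['thought']} {m['answer']}"
                         for m in demos) + f"\n\nQ: {test_question}\nA:"
    return llm_generate(prompt)
```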


Finding Supporting Examples for In-Context Learning

Feb 28, 2023
Xiaonan Li, Xipeng Qiu

In-context learning is a new learning paradigm where a language model observes a few examples and then directly outputs the prediction for the test input. Previous works have shown that in-context learning is sensitive to the provided examples and that randomly sampled examples yield significantly unstable performance. In this paper, we propose to find "supporting examples" for in-context learning: given the training dataset, we need to select one permutation of a few examples that is informative for the task's in-context learning and leads to superior performance. Although in traditional gradient-based learning, e.g., fine-tuning, there are numerous methods to find a "coreset" of the entire dataset, they are sub-optimal and not suitable for this problem, since in-context learning happens at the language model's inference time, without gradients or parameter updates. Additionally, the strong dependence among in-context examples makes this an NP-hard combinatorial optimization problem, and enumerating all possible permutations is infeasible. Hence we propose a two-stage method to tackle this challenge: first, we propose a novel metric to select informative examples based on the language model's feedback, with a progressive filtering strategy; then we propose a diversity-guided beam search method to iteratively refine and evaluate the selected examples. The experimental results show that our method significantly outperforms a wide range of baselines, and further analyses show the effectiveness of our method and shed light on the properties of supporting examples and in-context learning.
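
A condensed sketch of the two-stage selection: progressive filtering with an LM-feedback metric, then diversity-guided beam search over permutations. `lm_feedback_score`, `score_permutation`, and `diversity` are hypothetical stand-ins for the paper's metric and diversity term:

```python
def progressive_filter(candidates, lm_feedback_score, keep_ratio=0.5, rounds=3):
    """Stage 1: progressively keep the examples that the LM's feedback rates
    as most informative (hypothetical scoring function)."""
    pool = list(candidates)
    for _ in range(rounds):
        pool = sorted(pool, key=lm_feedback_score, reverse=True)
        pool = pool[:max(1, int(len(pool) * keep_ratio))]
    return pool

def diversity_beam_search(pool, score_permutation, diversity, k=4, beam=8):
    """Stage 2: grow permutations of k examples, keeping beams that balance
    the LM-based permutation score with diversity among the kept candidates."""
    beams = [[e] for e in pool[:beam]]
    for _ in range(k - 1):
        expanded = [b + [e] for b in beams for e in pool if e not in b]
        expanded.sort(key=lambda b: score_permutation(b) + diversity(b), reverse=True)
        beams = expanded[:beam]
    return max(beams, key=score_permutation)
```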


Soft-Labeled Contrastive Pre-training for Function-level Code Representation

Oct 18, 2022
Xiaonan Li, Daya Guo, Yeyun Gong, Yun Lin, Yelong Shen, Xipeng Qiu, Daxin Jiang, Weizhu Chen, Nan Duan

Code contrastive pre-training has recently achieved significant progress on code-related tasks. In this paper, we present SCodeR, a Soft-labeled contrastive pre-training framework with two positive sample construction methods to learn function-level Code Representation. Considering the relevance between codes in a large-scale code corpus, soft-labeled contrastive pre-training obtains fine-grained soft labels in an iterative adversarial manner and uses them to learn better code representations. Positive sample construction is another key component of contrastive pre-training. Previous works use transformation-based methods such as variable renaming to generate semantically equivalent positive codes. However, the generated code usually has a highly similar surface form, which misleads the model into focusing on superficial code structure instead of code semantics. To encourage SCodeR to capture semantic information from the code, we utilize code comments and abstract syntax sub-trees of the code to build positive samples. We conduct experiments on four code-related tasks over seven datasets. Extensive experimental results show that SCodeR achieves new state-of-the-art performance on all of them, which illustrates the effectiveness of the proposed pre-training method.
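
The soft-labeled objective can be read as replacing the usual one-hot contrastive target with a soft distribution over in-batch samples. A minimal PyTorch sketch under that reading (the iterative adversarial estimation of the soft labels is omitted):

```python
import torch
import torch.nn.functional as F

def soft_labeled_contrastive_loss(anchor, positives, soft_labels, temperature=0.05):
    """anchor: (b, d) code embeddings; positives: (b, d) embeddings of the
    constructed positive samples (e.g., from comments or AST sub-trees);
    soft_labels: (b, b) relevance distribution over in-batch samples."""
    sims = F.normalize(anchor, dim=-1) @ F.normalize(positives, dim=-1).T
    log_probs = F.log_softmax(sims / temperature, dim=-1)
    return -(soft_labels * log_probs).sum(dim=-1).mean()
```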

* Accepted to EMNLP 2022 (findings) 

An Embarrassingly Easy but Strong Baseline for Nested Named Entity Recognition

Aug 19, 2022
Hang Yan, Yu Sun, Xiaonan Li, Xipeng Qiu

Named entity recognition (NER) is the task of detecting and classifying entity spans in text. When entity spans overlap with each other, the problem is called nested NER. Span-based methods have been widely used to tackle nested NER. Most of these methods produce an $n \times n$ score matrix, where $n$ is the sentence length and each entry corresponds to a span. However, previous work ignores the spatial relations in the score matrix. In this paper, we propose using a Convolutional Neural Network (CNN) to model these spatial relations in the score matrix. Despite being simple, experiments on three commonly used nested NER datasets show that our model surpasses several recently proposed methods with the same pre-trained encoders. Further analysis shows that using a CNN helps the model find nested entities more accurately. In addition, we found that different papers used different sentence tokenizations for the three nested NER datasets, which influences the comparison. Thus, we release a pre-processing script to facilitate future comparison.
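
The core idea, treating the $n \times n$ span-score matrix as a 2D grid so a CNN can exploit spatial relations between neighboring spans, can be sketched in a few lines of PyTorch; this is a simplified reading, not the released model:

```python
import torch
import torch.nn as nn

class SpanMatrixCNN(nn.Module):
    """Refines an n x n span feature matrix with 2D convolutions so that a
    span's score can depend on its spatially adjacent spans."""
    def __init__(self, hidden, num_labels, kernel=3):
        super().__init__()
        self.conv = nn.Conv2d(hidden, hidden, kernel, padding=kernel // 2)
        self.out = nn.Conv2d(hidden, num_labels, 1)

    def forward(self, span_feats):            # (batch, hidden, n, n)
        h = torch.relu(self.conv(span_feats))
        return self.out(h)                    # (batch, num_labels, n, n)

# usage: scores = SpanMatrixCNN(hidden=256, num_labels=5)(torch.randn(2, 256, 32, 32))
```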

* Updates for Genia dataset 

CodeRetriever: Unimodal and Bimodal Contrastive Learning

Jan 26, 2022
Xiaonan Li, Yeyun Gong, Yelong Shen, Xipeng Qiu, Hang Zhang, Bolun Yao, Weizhen Qi, Daxin Jiang, Weizhu Chen, Nan Duan

In this paper, we propose the CodeRetriever model, which combines unimodal and bimodal contrastive learning to train function-level code semantic representations, specifically for the code search task. For unimodal contrastive learning, we design a semantic-guided method to build positive code pairs based on the documentation and function name. For bimodal contrastive learning, we leverage the documentation and in-line comments of code to build text-code pairs. Both contrastive objectives can fully leverage the large-scale code corpus for pre-training. Experimental results on several public benchmarks (e.g., CodeSearch, CoSQA) demonstrate the effectiveness of CodeRetriever in the zero-shot setting. By fine-tuning on domain- or language-specific downstream data, CodeRetriever achieves new state-of-the-art performance with significant improvements over existing code pre-trained models. We will make the code, model checkpoints, and constructed datasets publicly available.
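
Both objectives are in-batch contrastive losses over different pair types: code-code pairs for the unimodal loss and text-code pairs for the bimodal loss. A hedged PyTorch sketch of that combination:

```python
import torch
import torch.nn.functional as F

def info_nce(queries, keys, temperature=0.05):
    """In-batch InfoNCE: the i-th query should match the i-th key."""
    sims = F.normalize(queries, dim=-1) @ F.normalize(keys, dim=-1).T
    labels = torch.arange(queries.size(0), device=queries.device)
    return F.cross_entropy(sims / temperature, labels)

def coderetriever_step(code_emb, pos_code_emb, text_emb, paired_code_emb):
    """Unimodal loss over semantically related code pairs plus bimodal loss
    over text-code pairs (documentation / in-line comments vs. code)."""
    return info_nce(code_emb, pos_code_emb) + info_nce(text_emb, paired_code_emb)
```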


Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning

Aug 31, 2021
Linyang Li, Demin Song, Xiaonan Li, Jiehang Zeng, Ruotian Ma, Xipeng Qiu

Pre-Trained Models (PTMs) have been widely applied and have recently been proved vulnerable to backdoor attacks: the released pre-trained weights can be maliciously poisoned with certain triggers. When the triggers are activated, even the fine-tuned model will predict pre-defined labels, posing a security threat. The backdoors generated by previous poisoning methods can be erased by changing hyper-parameters during fine-tuning or detected by finding the triggers. In this paper, we propose a stronger weight-poisoning attack method that introduces a layerwise weight poisoning strategy to plant deeper backdoors; we also introduce a combinatorial trigger that cannot be easily detected. Experiments on text classification tasks show that previous defense methods cannot resist our weight-poisoning method, which indicates that our method can be widely applied and may provide hints for future model robustness studies.
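
A rough sketch of how a layerwise poisoning objective might look: on triggered inputs, the attacker supervises not only the final classifier but also each intermediate layer's representation toward the target label, so the backdoor sits deeper than the layers most affected by fine-tuning. The names and exact loss below are illustrative assumptions, not the released attack code:

```python
import torch
import torch.nn.functional as F

def layerwise_poisoning_loss(layer_cls_states, layer_probes, target_label):
    """layer_cls_states: list of (batch, hidden) [CLS] states from each layer
    of the poisoned encoder on triggered inputs; layer_probes: one linear
    classifier per layer; the loss pushes every layer toward the target label."""
    batch = layer_cls_states[0].size(0)
    target = torch.full((batch,), target_label, dtype=torch.long,
                        device=layer_cls_states[0].device)
    losses = [F.cross_entropy(probe(h), target)
              for h, probe in zip(layer_cls_states, layer_probes)]
    return torch.stack(losses).mean()
```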

* Accepted by EMNLP 2021 main conference 