Sijie Cheng

OpenChat: Advancing Open-source Language Models with Mixed-Quality Data

Sep 20, 2023
Guan Wang, Sijie Cheng, Xianyuan Zhan, Xiangang Li, Sen Song, Yang Liu

Open-source large language models such as LLaMA have recently emerged, and supervised fine-tuning (SFT) and reinforcement learning fine-tuning (RLFT) are now widely used to align them with human goals. However, SFT treats all training data equally regardless of quality, while RLFT requires high-quality pairwise or ranking-based preference data. In this study, we present a novel framework, named OpenChat, to advance open-source language models with mixed-quality data. Specifically, we consider general SFT training data consisting of a small amount of expert data mixed with a large proportion of sub-optimal data, without any preference labels. We propose C(onditioned)-RLFT, which regards different data sources as coarse-grained reward labels and learns a class-conditioned policy to leverage the complementary data-quality information. Interestingly, the optimal policy in C-RLFT can be solved through single-stage, RL-free supervised learning, which is lightweight and avoids costly human preference labeling. Through extensive experiments on three standard benchmarks, our openchat-13b fine-tuned with C-RLFT achieves the highest average performance among all 13b open-source language models. Moreover, we use AGIEval to validate generalization, and only openchat-13b surpasses the base model. Finally, we conduct a series of analyses to shed light on the effectiveness and robustness of OpenChat. Our code, data, and models are publicly available at https://github.com/imoneoi/openchat.
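
A minimal sketch of the core idea behind C-RLFT as described above: each sample is tagged with a condition token for its data source, and its supervised loss is scaled by a coarse source-level reward, so alignment reduces to single-stage, RL-free supervised learning. The toy GRU model, condition tokens, and reward values are illustrative assumptions, not the released implementation.

```python
# Illustrative (not the released) sketch of reward-weighted,
# class-conditioned supervised fine-tuning in the spirit of C-RLFT.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 100
COND = {"expert": VOCAB, "suboptimal": VOCAB + 1}    # condition token per data source
REWARD = {"expert": 1.0, "suboptimal": 0.1}          # coarse-grained source rewards

class ToyLM(nn.Module):
    def __init__(self, vocab=VOCAB + 2, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab)

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.head(h)

def crlft_step(model, opt, tokens, source):
    """One reward-weighted supervised step on a source-labelled sample."""
    # Condition the policy on the data source by prepending its token.
    x = torch.cat([torch.tensor([COND[source]]), tokens]).unsqueeze(0)
    logits = model(x[:, :-1])
    nll = F.cross_entropy(logits.reshape(-1, logits.size(-1)), x[:, 1:].reshape(-1))
    # The coarse reward scales the supervised loss instead of a preference model.
    (REWARD[source] * nll).backward()
    opt.step()
    opt.zero_grad()
    return nll.item()

model = ToyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
crlft_step(model, opt, torch.randint(0, VOCAB, (16,)), "expert")
crlft_step(model, opt, torch.randint(0, VOCAB, (16,)), "suboptimal")
```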

Prompt-Guided Retrieval Augmentation for Non-Knowledge-Intensive Tasks

May 28, 2023
Zhicheng Guo, Sijie Cheng, Yile Wang, Peng Li, Yang Liu

Retrieval-augmented methods have received increasing attention for supporting downstream tasks with useful information from external resources. Recent studies mainly focus on exploring retrieval for knowledge-intensive (KI) tasks, while the potential of retrieval for most non-knowledge-intensive (NKI) tasks remains under-explored. There are two main challenges in leveraging retrieval-augmented methods for NKI tasks: 1) the demand for diverse relevance score functions and 2) the dilemma between training cost and task performance. To address these challenges, we propose a two-stage framework for NKI tasks, named PGRA. In the first stage, we adopt a task-agnostic retriever to build a shared static index and select candidate evidence efficiently. In the second stage, we design a prompt-guided reranker to rerank the nearest evidence according to task-specific relevance for the reader. Experimental results show that PGRA outperforms other state-of-the-art retrieval-augmented methods. Our analyses further investigate the factors that influence model performance and demonstrate the generality of PGRA. Code is available at https://github.com/THUNLP-MT/PGRA.
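
A toy illustration of the two-stage retrieve-then-rerank flow described in the abstract. The hashed bag-of-words encoder stands in for the task-agnostic retriever, and the overlap-based scorer stands in for the prompt-guided PLM reranker; the corpus, query, and prompt are made up for the example.

```python
# Toy two-stage retrieve-then-rerank pipeline; the hashed encoder and
# overlap-based reranker are stand-ins for the paper's components.
import numpy as np

def encode(text, dim=64):
    """Stage-1 stand-in: task-agnostic hashed bag-of-words encoder."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-8)

corpus = [
    "the movie was wonderful and moving",
    "stock prices fell sharply on friday",
    "a terrible, boring and overlong film",
]
index = np.stack([encode(d) for d in corpus])   # shared static index

def retrieve(query, k=2):
    """Stage 1: select candidate evidence from the static index."""
    scores = index @ encode(query)
    return [corpus[i] for i in np.argsort(-scores)[:k]]

def rerank(query, candidates, task_prompt):
    """Stage 2: rescore candidates under a task-specific prompt (toy overlap)."""
    target = set((task_prompt + " " + query).lower().split())
    return sorted(candidates,
                  key=lambda c: len(target & set(c.lower().split())),
                  reverse=True)

query = "the film was wonderful"
print(rerank(query, retrieve(query), task_prompt="sentiment of the movie review"))
```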

Evolving Connectivity for Recurrent Spiking Neural Networks

May 28, 2023
Guan Wang, Yuhao Sun, Sijie Cheng, Sen Song

Recurrent spiking neural networks (RSNNs) hold great potential for advancing artificial general intelligence: they draw inspiration from the biological nervous system and show promise in modeling complex dynamics. However, the widely used surrogate gradient-based training methods for RSNNs are inherently inaccurate and unfriendly to neuromorphic hardware. To address these limitations, we propose the evolving connectivity (EC) framework, an inference-only method for training RSNNs. The EC framework reformulates weight tuning as a search over parameterized connection probability distributions and employs Natural Evolution Strategies (NES) to optimize these distributions. Our EC framework circumvents the need for gradients and features hardware-friendly characteristics, including sparse boolean connections and high scalability. We evaluate EC on a series of standard robotic locomotion tasks, where it achieves performance comparable to deep neural networks and outperforms gradient-trained RSNNs, even solving the complex 17-DoF humanoid task. Additionally, the EC framework achieves a two- to three-fold speedup over directly evolving parameters. By providing a performant and hardware-friendly alternative, the EC framework lays the groundwork for energy-efficient applications of RSNNs and advances the development of neuromorphic devices.
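
A compact sketch of the optimization loop suggested by the abstract: treat each connection as a Bernoulli variable, sample boolean wiring masks, and update the probabilities with an NES-style natural-gradient estimate weighted by fitness. The toy fitness function, population size, and learning rate are assumptions; in the paper, fitness would come from RSNN rollouts on locomotion tasks.

```python
# NES-style search over Bernoulli connection probabilities with a toy fitness;
# in the paper, fitness would be the return of an RSNN built from the sampled wiring.
import numpy as np

rng = np.random.default_rng(0)
n_conn = 200
theta = np.full(n_conn, 0.5)             # connection probabilities being evolved
target = rng.random(n_conn) < 0.3        # toy "good" wiring pattern

def fitness(mask):
    return (mask == target).mean()       # stand-in for an episodic task return

for step in range(300):
    masks = rng.random((64, n_conn)) < theta           # sample sparse boolean wirings
    f = np.array([fitness(m) for m in masks])
    f = (f - f.mean()) / (f.std() + 1e-8)              # standardise fitness
    # Natural-gradient estimate for Bernoulli parameters: E[f * (mask - theta)].
    grad = (f[:, None] * (masks - theta)).mean(axis=0)
    theta = np.clip(theta + 0.05 * grad, 1e-3, 1 - 1e-3)

print("fitness of a sampled wiring:", fitness(rng.random(n_conn) < theta))
```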

Modeling Adversarial Attack on Pre-trained Language Models as Sequential Decision Making

May 27, 2023
Xuanjie Fang, Sijie Cheng, Yang Liu, Wei Wang

Pre-trained language models (PLMs) have been widely used to underpin various downstream tasks. However, research on adversarial attacks has shown that PLMs are vulnerable to small perturbations. Mainstream methods adopt a detached two-stage framework for attacking, without considering the subsequent influence of each substitution step. In this paper, we formally model the adversarial attack task on PLMs as a sequential decision-making problem, in which the attack proceeds through two decisions at each step, i.e., finding a word to perturb and choosing its substitution. Since the attack process receives only the final state without any direct intermediate signals, we propose to use reinforcement learning to find an appropriate sequential attack path for generating adversarial examples, and name the method SDM-Attack. Extensive experimental results show that SDM-Attack achieves the highest attack success rate with comparable modification rate and semantic similarity when attacking fine-tuned BERT. Furthermore, our analyses demonstrate the generalization and transferability of SDM-Attack. The code is available at https://github.com/fduxuan/SDM-Attack.
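
A skeleton of the sequential decision process described above, with a toy victim classifier and random stand-in policies for the two decisions; the paper learns these decisions with reinforcement learning from the terminal attack-success signal. The victim, synonym table, and example sentence are illustrative assumptions.

```python
# Skeleton of the sequential attack loop with a toy victim and random stand-in
# policies; the paper learns both decisions with RL from the terminal reward.
import random

def victim_predict(tokens):
    # Toy victim classifier: predicts positive iff the word "good" is present.
    return 1 if "good" in tokens else 0

SYNONYMS = {"good": ["fine", "decent"], "movie": ["film"], "really": ["truly"]}

def attack(tokens, max_steps=5):
    original = victim_predict(tokens)
    for _ in range(max_steps):
        # Decision 1 (word finder): pick a position to perturb.
        editable = [i for i, t in enumerate(tokens) if t in SYNONYMS]
        if not editable:
            break
        pos = random.choice(editable)
        # Decision 2 (word substitution): pick the replacement word.
        tokens[pos] = random.choice(SYNONYMS[tokens[pos]])
        # Terminal signal: did the victim's prediction flip?
        if victim_predict(tokens) != original:
            return tokens, True
    return tokens, False

print(attack("a really good movie".split()))
```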

Say What You Mean! Large Language Models Speak Too Positively about Negative Commonsense Knowledge

May 13, 2023
Jiangjie Chen, Wei Shi, Ziquan Fu, Sijie Cheng, Lei Li, Yanghua Xiao

Large language models (LLMs) have been widely studied for their ability to store and utilize positive knowledge. However, negative knowledge, such as "lions don't live in the ocean", is also ubiquitous in the world but rarely mentioned explicitly in text. What do LLMs know about negative knowledge? This work examines the ability of LLMs to handle negative commonsense knowledge. We design a constrained keywords-to-sentence generation task (CG) and a Boolean question-answering task (QA) to probe LLMs. Our experiments reveal that LLMs frequently fail to generate valid sentences grounded in negative commonsense knowledge, yet they can correctly answer polar yes-or-no questions. We term this phenomenon the belief conflict of LLMs. Our further analysis shows that statistical shortcuts and negation reporting bias from language-modeling pre-training cause this conflict.

* Accepted to ACL 2023 
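
For concreteness, a sketch of how the two probes described in the abstract could be phrased. The exact templates are assumptions; the returned strings would be sent to whichever LLM is under test.

```python
# Illustrative prompt templates for the two probes; wording is an assumption.
def cg_prompt(keywords):
    """Constrained keywords-to-sentence generation (CG) probe."""
    return ("Write one short, factually correct sentence that uses all of these "
            f"keywords: {', '.join(keywords)}.")

def qa_prompt(statement):
    """Boolean question-answering (QA) probe."""
    return f"Answer yes or no. Is the following statement true? {statement}"

print(cg_prompt(["lions", "ocean", "live"]))          # tests generating negative knowledge
print(qa_prompt("Lions do not live in the ocean."))   # tests verifying the same fact
```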

Unsupervised Explanation Generation via Correct Instantiations

Nov 21, 2022
Sijie Cheng, Zhiyong Wu, Jiangjie Chen, Zhixing Li, Yang Liu, Lingpeng Kong

While large pre-trained language models (PLMs) have shown great skill at solving discriminative tasks, a significant gap remains when they are compared with humans on explanation-related tasks. Among these, explaining why a statement is wrong (e.g., against commonsense) is particularly challenging. The major difficulty is finding the conflict point, where the statement contradicts the real world. This paper proposes Neon, a two-phase, unsupervised explanation generation framework. Neon first generates corrected instantiations of the statement (phase I), then uses them to prompt large PLMs to find the conflict point and complete the explanation (phase II). We conduct extensive experiments on two standard explanation benchmarks, i.e., ComVE and e-SNLI. According to both automatic and human evaluations, Neon outperforms the baselines, even those with human-annotated instantiations. Beyond explaining a negative prediction, we further demonstrate that Neon remains effective when generalizing to different scenarios.

* Accepted to AAAI-23 
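
A runnable sketch of the two-phase flow outlined in the abstract, with a stubbed LLM standing in for the large PLM; the prompt templates and the `fake_llm` stub are illustrative assumptions rather than Neon's exact prompts.

```python
# Two-phase prompting sketch with a stubbed LLM so the example runs end to end.
def phase1_instantiate(statement, llm, n=2):
    """Phase I: generate corrected instantiations of the false statement."""
    prompt = (f'The statement "{statement}" is against commonsense. '
              f"Write {n} corrected versions of it, one per line.")
    return [line for line in llm(prompt).splitlines() if line.strip()]

def phase2_explain(statement, instantiations, llm):
    """Phase II: prompt the PLM with the instantiations to find the conflict point."""
    evidence = "\n".join(f"- {s}" for s in instantiations)
    prompt = (f"Correct facts:\n{evidence}\n\n"
              f"Explain in one sentence why this statement is wrong: {statement}")
    return llm(prompt)

def fake_llm(prompt):
    if "corrected" in prompt:
        return "He put the elephant into the zoo.\nHe put a toy elephant into the fridge."
    return "An elephant is far too large to fit inside a fridge."

statement = "He put an elephant into the fridge."
inst = phase1_instantiate(statement, fake_llm)
print(phase2_explain(statement, inst, fake_llm))
```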

Learning What You Need from What You Did: Product Taxonomy Expansion with User Behaviors Supervision

Mar 28, 2022
Sijie Cheng, Zhouhong Gu, Bang Liu, Rui Xie, Wei Wu, Yanghua Xiao

Taxonomies have been widely used in various domains to underpin numerous applications. In particular, product taxonomies play an essential role in the e-commerce domain for recommendation, browsing, and query understanding. However, taxonomies must constantly capture newly emerging terms or concepts on e-commerce platforms to stay up to date, which is expensive and labor-intensive when it relies on manual maintenance and updates. We therefore target the taxonomy expansion task, which attaches new concepts to existing taxonomies automatically. In this paper, we present a self-supervised and user-behavior-oriented product taxonomy expansion framework that appends new concepts to existing taxonomies. Our framework extracts hyponymy relations that conform to users' intentions and cognition. Specifically, i) to fully exploit user behavioral information, we extract candidate hyponymy relations that match user interests from query-click concepts; ii) to enhance the semantic information of new concepts and better detect hyponymy relations, we model concepts and relations through both user-generated content and structural information in existing taxonomies and user click logs, leveraging pre-trained language models and graph neural networks combined with contrastive learning; iii) to reduce the cost of dataset construction and overcome data skew, we construct a high-quality and balanced training dataset from the existing taxonomy without supervision. Extensive experiments on real-world product taxonomies from Meituan, a leading Chinese vertical e-commerce platform for ordering take-out with more than 70 million daily active users, demonstrate the superiority of our framework over state-of-the-art methods. Notably, our method enlarges a real-world product taxonomy from 39,263 to 94,698 relations with 88% precision.

* Accepted by ICDE'22 
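
A toy sketch of the mining-and-attachment step described in the abstract: concepts clicked for a user query become candidate hyponyms, and each is attached under the parent whose existing children it most resembles. The word-overlap scorer and the tiny taxonomy are stand-ins for the PLM + GNN hyponymy detector trained with contrastive learning.

```python
# Toy query-click mining and attachment; the overlap scorer is a stand-in
# for the learned hyponymy detector.
taxonomy = {
    "fast food": ["burger", "fries", "pizza"],
    "beverages": ["tea", "coffee", "juice"],
}

def hyponymy_score(concept, query, children):
    """Stand-in relatedness between a candidate concept (plus its query) and a parent."""
    toks = set(f"{concept} {query}".split())
    return sum(t in toks for child in children for t in child.split())

query_click_log = [
    ("spicy chicken burger deal", "chicken burger"),
    ("iced bubble milk tea", "bubble tea"),
]

for query, concept in query_click_log:   # candidate hyponymy relations from clicks
    parent = max(taxonomy, key=lambda p: hyponymy_score(concept, query, taxonomy[p]))
    print(f"attach '{concept}' under '{parent}'")
```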

Can Pre-trained Language Models Interpret Similes as Smart as Human?

Mar 16, 2022
Qianyu He, Sijie Cheng, Zhixu Li, Rui Xie, Yanghua Xiao

Simile interpretation is a crucial task in natural language processing. Pre-trained language models (PLMs) now achieve state-of-the-art performance on many tasks, but whether they can interpret similes remains under-explored. In this paper, we investigate the ability of PLMs to interpret similes by designing a novel task named Simile Property Probing, which asks PLMs to infer the shared properties of similes. We construct our simile property probing datasets from both general textual corpora and human-designed questions, containing 1,633 examples covering seven main categories. Our empirical study on the constructed datasets shows that PLMs can infer similes' shared properties but still underperform humans. To bridge the gap with human performance, we additionally design a knowledge-enhanced training objective that incorporates simile knowledge into PLMs via knowledge embedding methods. Our method yields a gain of 8.58% on the probing task and 1.37% on the downstream task of sentiment classification. The datasets and code are publicly available at https://github.com/Abbey4799/PLMs-Interpret-Simile.

* Accepted at ACL 2022 main conference 
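
A small sketch of one way the knowledge-enhanced objective could look: simile triples are embedded with a TransE-style translation loss that is added to the main probing loss. The entities, relation, margin, and loss weight below are illustrative assumptions, not the paper's exact recipe.

```python
# TransE-style auxiliary loss over simile knowledge added to a main probing loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

entities = {"lawyer": 0, "shark": 1, "aggressive": 2, "snow": 3, "white": 4}
emb = nn.Embedding(len(entities), 16)
rel = nn.Parameter(torch.randn(16))      # single "has_shared_property" relation

def transe_loss(head, tail, margin=1.0):
    h = emb(torch.tensor(entities[head]))
    t = emb(torch.tensor(entities[tail]))
    neg = emb(torch.randint(len(entities), (1,))).squeeze(0)   # corrupted tail
    return F.relu(margin + (h + rel - t).norm() - (h + rel - neg).norm())

probing_loss = torch.tensor(0.7)         # placeholder for the masked-property loss
loss = probing_loss + 0.1 * (transe_loss("lawyer", "aggressive")
                             + transe_loss("shark", "aggressive"))
loss.backward()
```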

Unsupervised Editing for Counterfactual Stories

Dec 10, 2021
Jiangjie Chen, Chun Gan, Sijie Cheng, Hao Zhou, Yanghua Xiao, Lei Li

Creating what-if stories requires reasoning about prior statements and the possible outcomes of changed conditions. One can easily generate coherent endings under new conditions, but it is challenging for current systems to do so with minimal changes to the original story. A major challenge is therefore the trade-off between generating a logical story and rewriting with minimal edits. In this paper, we propose EDUCAT, an editing-based unsupervised approach for counterfactual story rewriting. EDUCAT includes a target position detection strategy based on estimating the causal effects of the what-if conditions, which keeps the causally invariant parts of the story. EDUCAT then generates the story under fluency, coherence, and minimal-edit constraints. We also propose a new metric that alleviates the shortcomings of current automatic metrics and better evaluates the trade-off. We evaluate EDUCAT on a public counterfactual story rewriting benchmark. Experiments show that EDUCAT achieves the best trade-off among unsupervised SOTA methods according to both automatic and human evaluation. The resources of EDUCAT are available at https://github.com/jiangjiechen/EDUCAT.

* Accepted to AAAI 2022 
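
A toy sketch of the target-position detection idea from the abstract: score each token of the original ending by how much its plausibility shifts between the original and the what-if condition, and mark the highest-shift positions for editing. The word-overlap scorer and example story are stand-ins for the pretrained LM used to estimate causal effects.

```python
# Toy target-position detection for counterfactual rewriting.
def plausibility(token, condition):
    return 1.0 if token in condition.split() else 0.1   # toy LM score

def positions_to_edit(ending, original_cond, whatif_cond, top_k=2):
    shifts = []
    for i, tok in enumerate(ending.split()):
        shift = abs(plausibility(tok, original_cond) - plausibility(tok, whatif_cond))
        shifts.append((shift, i, tok))
    return sorted(shifts, reverse=True)[:top_k]          # likely causal-variant spans

original = "she packed sunscreen for the beach trip"
whatif = "she packed gloves for the ski trip"
ending = "she relaxed on the warm beach all day"
print(positions_to_edit(ending, original, whatif))
```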

FedGEMS: Federated Learning of Larger Server Models via Selective Knowledge Fusion

Oct 21, 2021
Sijie Cheng, Jingwen Wu, Yanghua Xiao, Yang Liu, Yang Liu

Today, data is often scattered across billions of resource-constrained edge devices with security and privacy constraints. Federated Learning (FL) has emerged as a viable solution for learning a global model while keeping data private, but the model complexity of FL is limited by the computational resources of edge nodes. In this work, we investigate a novel paradigm that takes advantage of a powerful server model to break through this capacity bottleneck in FL. By selectively learning from multiple teacher clients and itself, the server model develops in-depth knowledge and transfers it back to the clients to boost their respective performance. Our proposed framework achieves superior performance on both server and client models and provides several advantages in a unified framework, including flexibility for heterogeneous client architectures, robustness to poisoning attacks, and communication efficiency between clients and server. By effectively bridging FL with larger server-model training, our proposed paradigm paves the way for robust and continual knowledge accumulation from distributed and private data.

* Under review as a conference paper at ICLR 2022 
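
A minimal sketch of the selective knowledge fusion idea described in the abstract: for each public sample, keep only the client-teacher logits that agree with the label, fuse them, and distill the fused signal into the larger server model alongside the supervised loss. The selection rule, temperature, and toy dimensions are assumptions, not the exact FedGEMS procedure.

```python
# Toy selective knowledge fusion and distillation on the server.
import torch
import torch.nn.functional as F

num_classes, num_clients, batch = 10, 4, 8
server = torch.nn.Linear(32, num_classes)                # stand-in "large" server model
opt = torch.optim.SGD(server.parameters(), lr=0.1)

x = torch.randn(batch, 32)                               # public transfer data
y = torch.randint(num_classes, (batch,))
client_logits = torch.randn(num_clients, batch, num_classes)   # received from clients

# Select trustworthy teachers per sample: those whose prediction matches the label.
correct = (client_logits.argmax(-1) == y).float()        # (clients, batch)
weights = correct / correct.sum(0).clamp(min=1.0)
fused = (weights.unsqueeze(-1) * client_logits).sum(0)   # fused teacher logits

T = 2.0
student_logits = server(x)
kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
              F.softmax(fused / T, dim=-1), reduction="batchmean") * T * T
loss = F.cross_entropy(student_logits, y) + kd           # supervised + distillation
loss.backward()
opt.step()
```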