Zhuo Zhang

ParaFuzz: An Interpretability-Driven Technique for Detecting Poisoned Samples in NLP

Aug 04, 2023
Lu Yan, Zhuo Zhang, Guanhong Tao, Kaiyuan Zhang, Xuan Chen, Guangyu Shen, Xiangyu Zhang

Backdoor attacks have emerged as a prominent threat to natural language processing (NLP) models, where the presence of specific triggers in the input can cause poisoned models to misclassify those inputs into predetermined target classes. Current detection mechanisms are limited by their inability to address more covert backdoor strategies, such as style-based attacks. In this work, we propose an innovative test-time poisoned sample detection framework that hinges on the interpretability of model predictions, grounded in the semantic meaning of inputs. We contend that triggers (e.g., infrequent words) should not fundamentally alter the underlying semantic meaning of a poisoned sample, because the attack must remain stealthy. Based on this observation, we hypothesize that while a model's predictions for paraphrased clean samples should remain stable, its predictions for poisoned samples should revert to the true labels once paraphrasing mutates or removes the triggers. We employ ChatGPT, a state-of-the-art large language model, as our paraphraser and formulate the trigger-removal task as a prompt engineering problem. We adopt fuzzing, a technique commonly used for unearthing software vulnerabilities, to discover optimal paraphrase prompts that effectively eliminate triggers while preserving input semantics. Experiments on 4 types of backdoor attacks, including subtle style backdoors, and 4 distinct datasets demonstrate that our approach surpasses the baseline methods STRIP, RAP, and ONION in precision and recall.
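
The core check lends itself to a compact sketch. The following is a minimal illustration of the paraphrase-and-compare idea, not the ParaFuzz implementation: `classify` and `paraphrase` are hypothetical stand-ins for the model under test and for an LLM call with a candidate prompt found by fuzzing.

```python
# Minimal sketch of paraphrase-and-compare detection (hypothetical
# stand-ins, not the authors' code).

def classify(text: str) -> int:
    """Predicted label from the (possibly backdoored) model under test."""
    raise NotImplementedError  # plug in the model under test

def paraphrase(text: str, prompt: str) -> str:
    """Rewrite `text` with an LLM (e.g., ChatGPT) using a paraphrase prompt."""
    raise NotImplementedError  # plug in the LLM call

def is_poisoned(text: str, prompt: str) -> bool:
    # Hypothesis: paraphrasing mutates the trigger, so a poisoned sample's
    # prediction reverts to its true label, while a clean sample's
    # prediction stays stable under paraphrasing.
    return classify(text) != classify(paraphrase(text, prompt))
```

In this framing, fuzzing would score each candidate prompt by how well `is_poisoned` separates known-clean from known-poisoned validation samples, keeping the prompt that best removes triggers while preserving semantics.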

When Federated Learning Meets Pre-trained Language Models' Parameter-Efficient Tuning Methods

Dec 20, 2022
Zhuo Zhang, Yuanhang Yang, Yong Dai, Lizhen Qu, Zenglin Xu

With increasing concerns about data privacy, recent studies have made significant progress applying federated learning (FL) to privacy-sensitive natural language processing (NLP) tasks. Much of the literature suggests that fully fine-tuning pre-trained language models (PLMs) in the FL paradigm can mitigate the data heterogeneity problem and close the performance gap with centralized training. However, large PLMs impose prohibitive communication overhead and local model adaptation costs on the FL system. To this end, we introduce various parameter-efficient tuning (PETuning) methods into federated learning. Specifically, we provide a holistic empirical study of representative PLM tuning methods in FL. The experimental results cover analyses of data heterogeneity levels, data scales, and different FL scenarios. Overall, communication overhead can be significantly reduced by locally tuning and globally aggregating lightweight model parameters while maintaining acceptable performance across various FL settings. To facilitate research on PETuning in FL, we also develop a federated tuning framework, FedPETuning, which allows practitioners to conveniently apply different PETuning methods under the FL training paradigm. The source code is available at https://github.com/iezhuozhuo/FedETuning/tree/deltaTuning.
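
As a rough sketch of the core loop, assuming PyTorch and that PETuning parameters are identifiable by name (e.g., adapter or LoRA weights); this is an illustration of the idea, not the FedPETuning code:

```python
# Sketch: only the lightweight PETuning parameters leave each client and
# are averaged on the server (assumption: their names contain "adapter").
import torch

def local_delta(model: torch.nn.Module) -> dict:
    # Extract the small tunable parameters after local training.
    return {n: p.detach().clone() for n, p in model.named_parameters()
            if "adapter" in n}

def fedavg(client_deltas: list, weights: list) -> dict:
    # Weighted average of the lightweight parameters across clients,
    # e.g., weighted by local dataset size.
    total = sum(weights)
    return {k: sum(w * d[k] for d, w in zip(client_deltas, weights)) / total
            for k in client_deltas[0]}
```

Because only the adapter tensors are communicated, per-round traffic scales with the PETuning parameter count rather than the full PLM size.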

Backdoor Vulnerabilities in Normally Trained Deep Learning Models

Nov 29, 2022
Guanhong Tao, Zhenting Wang, Siyuan Cheng, Shiqing Ma, Shengwei An, Yingqi Liu, Guangyu Shen, Zhuo Zhang, Yunshu Mao, Xiangyu Zhang

We conduct a systematic study of backdoor vulnerabilities in normally trained deep learning models. They are as dangerous as backdoors injected by data poisoning because both can be equally exploited. We use 20 different types of injected backdoor attacks from the literature as guidance and study their correspondences in normally trained models, which we call natural backdoor vulnerabilities. We find that natural backdoors are widespread, with most injected backdoor attacks having natural correspondences. We categorize these natural backdoors and propose a general detection framework. It finds 315 natural backdoors in 56 normally trained models downloaded from the Internet, covering all the categories, whereas existing scanners designed for injected backdoors detect at most 65. We also study the root causes of natural backdoors and defenses against them.
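
To make the scanning idea concrete, here is a toy version of trigger inversion for one target class, written under PyTorch assumptions rather than taken from the paper's framework: if a small universal perturbation flips a batch of clean inputs to the target class, the model carries a backdoor-like (natural) vulnerability.

```python
# Toy natural-backdoor scan for one target class (my sketch, not the
# paper's detection framework).
import torch
import torch.nn.functional as F

def scan_target_class(model, images, target, steps=200, lr=0.1, eps=0.1):
    # Optimize one small perturbation applied to every image.
    delta = torch.zeros_like(images[0], requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    labels = torch.full((len(images),), target, dtype=torch.long)
    for _ in range(steps):
        loss = F.cross_entropy(model(images + delta), labels)
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation small
    # A high attack success rate under a tiny perturbation budget suggests
    # a natural backdoor toward `target`.
    asr = (model(images + delta).argmax(1) == labels).float().mean().item()
    return asr, delta.detach()
```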

Parameter-Efficient Conformers via Sharing Sparsely-Gated Experts for End-to-End Speech Recognition

Sep 17, 2022
Ye Bai, Jie Li, Wenjing Han, Hao Ni, Kaituo Xu, Zhuo Zhang, Cheng Yi, Xiaorui Wang

While transformers and their conformer variants show promising performance in speech recognition, their heavy parameterization incurs substantial memory costs during training and inference. Some works use cross-layer weight sharing to reduce the number of model parameters, but the resulting loss of capacity harms performance. To address this issue, this paper proposes a parameter-efficient conformer that shares sparsely-gated experts. Specifically, we use a sparsely-gated mixture-of-experts (MoE) to extend the capacity of a conformer block without increasing computation. Then, the parameters of grouped conformer blocks are shared so that the number of parameters is reduced. Next, to give the shared blocks the flexibility to adapt representations at different levels, we design the MoE routers and normalization layers individually for each block. Moreover, we use knowledge distillation to further improve performance. Experimental results show that, compared with the full-parameter model, the proposed model achieves competitive performance with 1/3 of the encoder's parameters.
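
A minimal PyTorch sketch of the sharing scheme, based on my reading of the abstract rather than the paper's code: expert weights are shared across a group of blocks, while each block keeps its own router and normalization so representations can still specialize per layer.

```python
import torch
import torch.nn as nn

class SharedExperts(nn.Module):
    # Expert FFNs whose weights are shared by a group of conformer blocks.
    def __init__(self, d_model, d_ff, n_experts):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts))

class MoEBlock(nn.Module):
    # Each block keeps its own router and norm so it can still adapt
    # representations at its own level, despite the shared experts.
    def __init__(self, shared: SharedExperts, d_model):
        super().__init__()
        self.shared = shared                                   # shared
        self.router = nn.Linear(d_model, len(shared.experts))  # per-block
        self.norm = nn.LayerNorm(d_model)                      # per-block

    def forward(self, x):  # x: (batch, time, d_model)
        top = self.router(x).argmax(-1)       # top-1 gating for simplicity
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.shared.experts):
            mask = (top == i).unsqueeze(-1)   # tokens routed to expert i
            out = out + mask * expert(x)      # dense for clarity, not sparse
        return self.norm(x + out)
```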

* Accepted at INTERSPEECH 2022

DECK: Model Hardening for Defending Pervasive Backdoors

Jun 18, 2022
Guanhong Tao, Yingqi Liu, Siyuan Cheng, Shengwei An, Zhuo Zhang, Qiuling Xu, Guangyu Shen, Xiangyu Zhang

Pervasive backdoors are triggered by dynamic and pervasive input perturbations. They can be intentionally injected by attackers or exist naturally in normally trained models. Their nature differs from that of traditional static, localized backdoors, which can be triggered by perturbing a small input area with a fixed pattern, e.g., a solid-color patch. Existing defense techniques are highly effective against traditional backdoors but may not work well against pervasive ones, especially for backdoor removal and model hardening. In this paper, we propose a novel model hardening technique against pervasive backdoors, covering both natural and injected backdoors. We develop a general pervasive attack based on an encoder-decoder architecture enhanced with a special transformation layer. The attack can model a wide range of existing pervasive backdoor attacks and quantify them by class distances. Using samples derived from our attack in adversarial training can therefore harden a model against these backdoor vulnerabilities. Our evaluation on 9 datasets with 15 model structures shows that our technique enlarges class distances by 59.65% on average with less than 1% accuracy degradation and no robustness loss, outperforming five hardening techniques, including adversarial training, universal adversarial training, and MOTH. It reduces the attack success rate of six pervasive backdoor attacks from 99.06% to 1.94%, surpassing seven state-of-the-art backdoor removal techniques.
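
A skeletal view of the hardening loop, under my own assumptions (PyTorch; `generator` stands for the encoder-decoder that produces pervasively perturbed inputs); this is illustrative, not DECK's released code.

```python
import torch.nn.functional as F

def harden_step(model, generator, x, y, opt_model, opt_gen, lam=1.0):
    # Generator step: make the pervasive perturbation fool the model while
    # keeping the perturbed input close to the original.
    x_adv = generator(x)
    gen_loss = -F.cross_entropy(model(x_adv), y) + lam * F.mse_loss(x_adv, x)
    opt_gen.zero_grad(); gen_loss.backward(); opt_gen.step()
    # Model step: adversarial training on the generated samples hardens the
    # model against this family of backdoor vulnerabilities.
    model_loss = (F.cross_entropy(model(x), y)
                  + F.cross_entropy(model(generator(x).detach()), y))
    opt_model.zero_grad(); model_loss.backward(); opt_model.step()
    return model_loss.item()
```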

Do Deep Neural Networks Always Perform Better When Eating More Data?

May 30, 2022
Jiachen Yang, Zhuo Zhang, Yicheng Gong, Shukun Ma, Xiaolan Guo, Yue Yang, Shuai Xiao, Jiabao Wen, Yang Li, Xinbo Gao, Wen Lu, Qinggang Meng

Data availability has become a bottleneck of deep learning. Researchers across fields share the intuition that "deep neural networks might not always perform better when they eat more data," yet this intuition still lacks experimental validation and a convincing guiding theory. To fill this gap, we design experiments under both Independent and Identically Distributed (IID) and Out-of-Distribution (OOD) conditions, which yield clear answers. For guidance, based on a discussion of the results, we propose two theories: under the IID condition, the amount of information determines the effectiveness of each sample, while the contribution of samples and the differences between classes determine the amount of sample information and class information; under the OOD condition, the cross-domain degree of samples determines their contributions, and bias-fitting caused by irrelevant elements is a significant factor in cross-domain degradation. These theories provide guidance from the perspective of data and can promote a wide range of practical applications of artificial intelligence.
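
A toy scaffold for the kind of data-scaling experiment the abstract describes; this is my sketch, and `train_fn` and `eval_fn` are hypothetical callables.

```python
# Train on nested subsets of increasing size and record test accuracy,
# once against an in-distribution (IID) test set and once against a
# shifted (OOD) one.
def data_scaling_curve(train_fn, eval_fn, train_set, test_iid, test_ood,
                       fractions=(0.1, 0.25, 0.5, 1.0)):
    results = []
    for f in fractions:
        model = train_fn(train_set[: int(f * len(train_set))])
        results.append((f, eval_fn(model, test_iid), eval_fn(model, test_ood)))
    # If "more data is always better," accuracy should rise monotonically
    # in both columns; plateaus or drops are evidence against it.
    return results
```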

Constrained Optimization with Dynamic Bound-scaling for Effective NLP Backdoor Defense

Feb 11, 2022
Guangyu Shen, Yingqi Liu, Guanhong Tao, Qiuling Xu, Zhuo Zhang, Shengwei An, Shiqing Ma, Xiangyu Zhang

We develop a novel optimization method for NLP backdoor inversion. We leverage a dynamically decreasing temperature coefficient in the softmax function to present changing loss landscapes to the optimizer, so that the process gradually focuses on the ground-truth trigger, which is represented as a one-hot vector in a convex hull. Our method also features a temperature rollback mechanism to step away from local optima, exploiting the observation that local optima can be easily determined in NLP trigger inversion (though not in general optimization). We evaluate the technique on over 1600 models (roughly half of them with injected backdoors) on 3 prevailing NLP tasks, with 4 different backdoor attacks and 7 architectures. Our results show that the technique effectively and efficiently detects and removes backdoors, outperforming 4 baseline methods.
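
A minimal sketch of temperature-scheduled trigger inversion, based on my reading of the abstract rather than the released code: the trigger is a soft distribution over the vocabulary per position, the temperature shrinks while the loss improves (sharpening toward a one-hot trigger), and is rolled back when progress stalls. `loss_fn` is a hypothetical callable, e.g., cross-entropy toward the target label when the soft trigger is embedded into clean inputs.

```python
import torch

def invert_trigger(loss_fn, vocab_size, trig_len=3, steps=500,
                   tau0=1.0, decay=0.98, rollback=1.5):
    # Soft trigger: a distribution over the vocabulary per trigger position.
    z = torch.zeros(trig_len, vocab_size, requires_grad=True)
    opt = torch.optim.Adam([z], lr=0.1)
    tau, best = tau0, float("inf")
    for _ in range(steps):
        w = torch.softmax(z / tau, dim=-1)  # point in the convex hull
        loss = loss_fn(w)
        opt.zero_grad(); loss.backward(); opt.step()
        if loss.item() < best:
            best = loss.item()
            tau *= decay                     # sharpen toward a one-hot trigger
        else:
            tau = min(tau * rollback, tau0)  # roll back to escape local optima
    return torch.softmax(z / tau, dim=-1).argmax(-1)  # trigger token ids
```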

Seizure Detection using Least EEG Channels by Deep Convolutional Neural Network

Jan 14, 2019
Mustafa Talha Avcu, Zhuo Zhang, Derrick Wei Shih Chan

This work aims to develop an end-to-end solution for seizure onset detection. We design SeizNet, a convolutional neural network for seizure detection. To compare SeizNet with a traditional machine learning approach, we implement a baseline classifier using spectral band-power features with support vector machines (BPsvm). We explore the possibility of using the fewest channels for accurate seizure detection by evaluating SeizNet and BPsvm in all-channel and two-channel settings, respectively. EEG data were acquired from 29 pediatric patients admitted to KK Women's and Children's Hospital who were diagnosed with typical absence seizures. We conduct leave-one-out cross-validation across all subjects. Using full-channel data, BPsvm yields a sensitivity of 86.6% with 0.84 false alarms per hour, while SeizNet yields an overall sensitivity of 95.8% with 0.17 false alarms per hour. More interestingly, the two-channel SeizNet outperforms full-channel BPsvm, with a sensitivity of 93.3% and 0.58 false alarms per hour. We further investigate the interpretability of SeizNet by decoding the filters learned along the convolutional layers. Seizure-like characteristics can be clearly observed in the filters of the third and fourth convolutional layers.
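
For orientation, here is a small 1D CNN in the spirit of SeizNet; all layer counts and sizes are my guesses for illustration, not the published architecture.

```python
import torch.nn as nn

def make_seizure_cnn(n_channels=2, n_classes=2):
    # Maps an EEG window of shape (batch, channels, samples) to class logits.
    return nn.Sequential(
        nn.Conv1d(n_channels, 16, kernel_size=7, padding=3), nn.ReLU(),
        nn.MaxPool1d(4),
        nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
        nn.MaxPool1d(4),
        nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        nn.Linear(64, n_classes))
```

Inspecting the learned `Conv1d` filters (e.g., plotting their weights or their responses to seizure segments) is one way to probe interpretability, as the abstract does for the deeper layers.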
