Youcheng Sun

QNNRepair: Quantized Neural Network Repair

Jun 27, 2023
Xidan Song, Youcheng Sun, Mustafa A. Mustafa, Lucas C. Cordeiro

We present QNNRepair, the first method in the literature for repairing quantized neural networks (QNNs). QNNRepair aims to improve the accuracy of a neural network model after quantization. It takes as input the full-precision and weight-quantized neural networks, together with a repair dataset of passing and failing tests. First, QNNRepair applies a software fault localization method to identify the neurons that cause performance degradation during neural network quantization. It then formulates the repair problem as a linear programming problem over the neuron weight parameters, which corrects the QNN's behaviour on failing tests without compromising its performance on passing tests. We evaluate QNNRepair with widely used neural network architectures such as MobileNetV2, ResNet, and VGGNet on popular datasets, including high-resolution images, and compare it with the state-of-the-art data-free quantization method SQuant. The experimental results show that QNNRepair is effective in improving the quantized model's performance in most cases; its repaired models achieve 24% higher accuracy than SQuant's on the independent validation set, especially for the ImageNet dataset.
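
As a rough illustration of the repair step, the following is a minimal sketch (not the authors' implementation; the function name, inputs, and margin are assumptions) that casts single-neuron repair as a linear program with SciPy: find small weight deltas that push the neuron's pre-activation above a margin on failing tests while not lowering it on passing tests.

import numpy as np
from scipy.optimize import linprog

def repair_neuron(w, X_fail, X_pass, margin=0.0):
    """w: 1-D numpy weight vector; X_fail / X_pass: lists of 1-D numpy input vectors."""
    n = w.size
    # Variables are [delta, t]; minimise sum(t) subject to |delta_i| <= t_i.
    c = np.concatenate([np.zeros(n), np.ones(n)])
    rows, rhs = [], []
    eye = np.eye(n)
    rows.append(np.hstack([eye, -eye]))      #  delta_i - t_i <= 0
    rows.append(np.hstack([-eye, -eye]))     # -delta_i - t_i <= 0
    rhs.extend([np.zeros(n), np.zeros(n)])
    for x in X_fail:                         # require (w + delta) . x >= margin on failing tests
        rows.append(np.concatenate([-x, np.zeros(n)])[None, :])
        rhs.append([float(x @ w) - margin])
    for x in X_pass:                         # do not lower the pre-activation on passing tests
        rows.append(np.concatenate([-x, np.zeros(n)])[None, :])
        rhs.append([0.0])
    res = linprog(c, A_ub=np.vstack(rows), b_ub=np.concatenate(rhs),
                  bounds=[(None, None)] * n + [(0, None)] * n, method="highs")
    return w + res.x[:n] if res.success else w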

A New Era in Software Security: Towards Self-Healing Software via Large Language Models and Formal Verification

May 24, 2023
Yiannis Charalambous, Norbert Tihanyi, Ridhi Jain, Youcheng Sun, Mohamed Amine Ferrag, Lucas C. Cordeiro

In this paper, we present a novel solution that combines the capabilities of Large Language Models (LLMs) with Formal Verification strategies to verify and automatically repair software vulnerabilities. Initially, we employ Bounded Model Checking (BMC) to locate the software vulnerability and derive a counterexample, which provides evidence that the system behaves incorrectly or contains a vulnerability. The detected counterexample, along with the source code, is provided to the LLM engine. Our approach involves establishing a specialized prompt language for code debugging and generation to understand the vulnerability's root cause and repair the code. Finally, we use BMC to verify the corrected version of the code generated by the LLM. As a proof of concept, we create ESBMC-AI based on the Efficient SMT-based Context-Bounded Model Checker (ESBMC) and a pre-trained Transformer model, specifically gpt-3.5-turbo, to detect and fix errors in C programs. Our experimentation involved generating a dataset of 1000 C code samples, each consisting of 20 to 50 lines of code. Notably, our proposed method achieved a success rate of up to 80% in repairing vulnerable code encompassing buffer overflow and pointer dereference failures. We assert that this automated approach can be effectively incorporated into the continuous integration and deployment (CI/CD) process of the software development lifecycle.
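
The sketch below illustrates the verify-then-repair loop described in the abstract; it is not the ESBMC-AI implementation. The esbmc command-line flags, the zero-exit-code convention, and the ask_llm() helper are assumptions standing in for the real tool invocation and LLM API.

import subprocess

def run_bmc(c_file: str) -> str:
    """Run ESBMC on a C file; return the counterexample text, or '' if no violation is found."""
    # Exact flags are an assumption; a zero exit code is taken to mean the checks pass.
    result = subprocess.run(["esbmc", c_file, "--unwind", "5"],
                            capture_output=True, text=True)
    return "" if result.returncode == 0 else result.stdout

def ask_llm(source: str, counterexample: str) -> str:
    """Hypothetical wrapper around a chat-completion API (e.g. gpt-3.5-turbo)."""
    raise NotImplementedError("plug in your LLM client here")

def repair(c_file: str, max_rounds: int = 3) -> bool:
    for _ in range(max_rounds):
        cex = run_bmc(c_file)
        if not cex:                        # BMC found no violation: done
            return True
        source = open(c_file).read()
        fixed = ask_llm(source, cex)       # prompt with source code + counterexample
        open(c_file, "w").write(fixed)     # try the candidate repair
    return not run_bmc(c_file)             # final verification pass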

AIREPAIR: A Repair Platform for Neural Networks

Nov 24, 2022
Xidan Song, Youcheng Sun, Mustafa A. Mustafa, Lucas Cordeiro

We present AIREPAIR, a platform for repairing neural networks. It features the integration of existing network repair tools. Based on AIREPAIR, one can run different repair methods on the same model, thus enabling the fair comparison of different repair techniques. We evaluate AIREPAIR with three state-of-the-art repair tools on popular deep-learning datasets and models. Our evaluation confirms the utility of AIREPAIR, by comparing and analyzing the results from different repair techniques. A demonstration is available at https://youtu.be/UkKw5neeWhw.
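
Purely as an illustration of the kind of uniform interface such a platform provides, here is a tiny plugin-style dispatcher; AIREPAIR's real command line and plugin API may differ, and every name below is an assumption.

REPAIR_BACKENDS = {}    # name -> callable(model, repair_data) -> repaired model

def register(name):
    """Decorator that registers a repair method under a common interface."""
    def wrap(fn):
        REPAIR_BACKENDS[name] = fn
        return fn
    return wrap

def compare(model, repair_data, eval_fn):
    """Run every registered repair method on the same model and report its score."""
    return {name: eval_fn(fix(model, repair_data))
            for name, fix in REPAIR_BACKENDS.items()}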

Safety Analysis of Autonomous Driving Systems Based on Model Learning

Nov 23, 2022
Renjue Li, Tianhang Qin, Pengfei Yang, Cheng-Chao Huang, Youcheng Sun, Lijun Zhang

We present a practical verification method for the safety analysis of autonomous driving systems (ADSs). The main idea is to build a surrogate model that quantitatively depicts the behaviour of an ADS in a specified traffic scenario. The safety properties proved on the resulting surrogate model apply to the original ADS with a probabilistic guarantee. Furthermore, we explore the safe and unsafe parameter spaces of the traffic scenario for driving hazards. We demonstrate the utility of the proposed approach by evaluating safety properties on a state-of-the-art ADS from the literature under a variety of simulated traffic scenarios.
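
For intuition about the probabilistic guarantee, here is a simplified sampling-based stand-in (not the paper's surrogate-model construction): draw scenario parameters at random, run a hypothetical simulator, and bound the true violation probability with a Hoeffding-style confidence interval. The simulate() callable and parameter ranges are assumptions.

import math
import random

def estimate_unsafe_probability(simulate, param_ranges, n_samples=1000, delta=0.01):
    """simulate(params) -> True if the scenario is unsafe; returns (estimate, epsilon)."""
    unsafe = 0
    for _ in range(n_samples):
        params = {k: random.uniform(lo, hi) for k, (lo, hi) in param_ranges.items()}
        unsafe += bool(simulate(params))
    p_hat = unsafe / n_samples
    # With probability at least 1 - delta, |p_hat - p_true| <= epsilon (Hoeffding bound).
    epsilon = math.sqrt(math.log(2 / delta) / (2 * n_samples))
    return p_hat, epsilon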

An Overview of Structural Coverage Metrics for Testing Neural Networks

Aug 05, 2022
Muhammad Usman, Youcheng Sun, Divya Gopinath, Rishi Dange, Luca Manolache, Corina S. Pasareanu

Deep neural network (DNN) models, including those used in safety-critical domains, need to be thoroughly tested to ensure that they can reliably perform well in different scenarios. In this article, we provide an overview of structural coverage metrics for testing DNN models, including neuron coverage (NC), k-multisection neuron coverage (kMNC), top-k neuron coverage (TKNC), neuron boundary coverage (NBC), strong neuron activation coverage (SNAC) and modified condition/decision coverage (MC/DC). We evaluate the metrics on realistic DNN models used for perception tasks (including LeNet-1, LeNet-4, LeNet-5, and ResNet20) as well as on networks used in autonomy (TaxiNet). We also provide a tool, DNNCov, which can measure the testing coverage for all these metrics. DNNCov outputs an informative coverage report to enable researchers and practitioners to assess the adequacy of DNN testing, compare different coverage measures, and more conveniently inspect the model's internals during testing.
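
As a minimal sketch of the simplest of these metrics, neuron coverage (NC) is the fraction of neurons whose activation exceeds a threshold on at least one test input. The snippet below assumes activations have already been extracted per layer; the threshold value and data layout are assumptions, not DNNCov's interface.

import numpy as np

def neuron_coverage(activations_per_input, threshold=0.0):
    """activations_per_input: list over test inputs of lists of 1-D numpy arrays (one per layer)."""
    covered = None
    for layers in activations_per_input:
        flat = np.concatenate([a.ravel() for a in layers])
        fired = flat > threshold
        covered = fired if covered is None else (covered | fired)
    return float(covered.mean()) if covered is not None else 0.0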

VeriFi: Towards Verifiable Federated Unlearning

May 25, 2022
Xiangshan Gao, Xingjun Ma, Jingyi Wang, Youcheng Sun, Bo Li, Shouling Ji, Peng Cheng, Jiming Chen

Federated learning (FL) is a collaborative learning paradigm in which participants jointly train a powerful model without sharing their private data. One desirable property for FL is the right to be forgotten (RTBF), i.e., a leaving participant has the right to request that its private data be deleted from the global model. However, unlearning itself may not be enough to implement RTBF unless the unlearning effect can be independently verified, an important aspect that has been overlooked in the current literature. In this paper, we put forward the concept of verifiable federated unlearning and propose VeriFi, a unified framework integrating federated unlearning and verification that allows systematic analysis of the unlearning process and quantification of its effect, with different combinations of multiple unlearning and verification methods. In VeriFi, the leaving participant is granted the right to verify (RTV): the participant notifies the server before leaving and then actively verifies the unlearning effect over the next few communication rounds. The unlearning is done on the server side immediately after receiving the leaving notification, while the verification is done locally by the leaving participant in two steps: marking (injecting carefully designed markers to fingerprint the leaver) and checking (examining the change in the global model's performance on the markers). Based on VeriFi, we conduct the first systematic and large-scale study of verifiable federated unlearning, considering 7 unlearning methods and 5 verification methods. In particular, we propose a more efficient and FL-friendly unlearning method, and two more effective and robust non-invasive verification methods. We extensively evaluate VeriFi on 7 datasets and 4 types of deep learning models. Our analysis establishes important empirical understanding for more trustworthy federated unlearning.
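
A schematic sketch of the marking and checking steps described above follows; it is not the VeriFi implementation, and the make_markers() and evaluate() callables are placeholders for the concrete marker and verification methods the paper studies.

def mark(local_data, make_markers):
    """Leaver injects fingerprinting markers into its local training data."""
    markers = make_markers(local_data)           # e.g. specially crafted samples
    return local_data + markers, markers

def check(global_model_before, global_model_after, markers, evaluate):
    """Compare the global model's behaviour on the markers before and after unlearning."""
    score_before = evaluate(global_model_before, markers)
    score_after = evaluate(global_model_after, markers)
    # A clear drop in marker performance after unlearning is evidence that the
    # leaver's contribution has been removed from the global model.
    return score_before - score_after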

VPN: Verification of Poisoning in Neural Networks

May 08, 2022
Youcheng Sun, Muhammad Usman, Divya Gopinath, Corina S. Păsăreanu

Neural networks are successfully used in a variety of applications, many of which have safety and security concerns. As a result, researchers have proposed formal verification techniques for verifying neural network properties. While previous efforts have mainly focused on checking local robustness in neural networks, we instead study another neural network security issue, namely data poisoning, in which an attacker inserts a trigger into a subset of the training data such that, at test time, this trigger in an input causes the trained model to misclassify to some target class. We show how to formulate the check for data poisoning as a property that can be checked with off-the-shelf verification tools, such as Marabou and nnenum, where counterexamples of failed checks constitute the triggers. We further show that the discovered triggers are 'transferable' from a small model to a larger, better-trained model, allowing us to analyze state-of-the-art performant models trained for image classification tasks.
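
To make the property concrete, here is a brute-force stand-in for the verification query (the paper discharges it with solvers such as Marabou and nnenum): search for a small uniform patch at a fixed position that sends every image in a clean sample set to the target class. The predict() callable, patch grid, and intensity levels are assumptions.

import numpy as np

def find_trigger(predict, images, target, pos=(0, 0), size=3, levels=(0.0, 0.5, 1.0)):
    """images: numpy array of shape (N, H, W); returns a patch value that flips all images, or None."""
    r, c = pos
    for value in levels:                          # coarse grid over patch intensities
        patched = images.copy()
        patched[:, r:r + size, c:c + size] = value
        preds = [predict(x) for x in patched]
        if all(p == target for p in preds):       # candidate trigger misclassifies every input
            return value
    return None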

AntidoteRT: Run-time Detection and Correction of Poison Attacks on Neural Networks

Jan 31, 2022
Muhammad Usman, Youcheng Sun, Divya Gopinath, Corina S. Pasareanu

We study backdoor poisoning attacks against image classification networks, whereby an attacker inserts a trigger into a subset of the training data in such a way that, at test time, this trigger causes the classifier to predict some target class. Several techniques proposed in the literature aim to detect such attacks, but only a few also defend against them, and they typically involve retraining the network, which is not always possible in practice. We propose lightweight automated detection and correction techniques against poisoning attacks, based on neuron patterns mined from the network using a small set of clean and poisoned test samples with known labels. The patterns built from the misclassified samples are used for run-time detection of new poisoned inputs. For correction, we propose an input correction technique that uses a differential analysis to identify the trigger in the detected poisoned images, which is then reset to a neutral color. Our detection and correction are performed at run-time and at the input level, in contrast to most existing work, which focuses on offline model-level defenses. We demonstrate that our technique outperforms existing defenses such as NeuralCleanse and STRIP on popular benchmarks such as MNIST, CIFAR-10, and GTSRB against the popular BadNets attack and the more complex DFST attack.
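
A rough sketch of the input-correction idea follows (not the AntidoteRT implementation): compare a detected poisoned image against a clean reference for its predicted class (here a class-mean image, which is an assumption), treat the most differing pixels as the likely trigger region, and reset them to a neutral color.

import numpy as np

def correct_input(image, clean_reference, trigger_fraction=0.05, neutral=0.5):
    """image, clean_reference: float numpy arrays in [0, 1] with the same shape."""
    diff = np.abs(image - clean_reference)
    k = max(1, int(trigger_fraction * diff.size))
    cutoff = np.partition(diff.ravel(), -k)[-k]   # threshold keeping the top-k differences
    corrected = image.copy()
    corrected[diff >= cutoff] = neutral           # overwrite suspected trigger pixels
    return corrected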

NNrepair: Constraint-based Repair of Neural Network Classifiers

Mar 23, 2021
Muhammad Usman, Divya Gopinath, Youcheng Sun, Yannic Noller, Corina Pasareanu

We present NNrepair, a constraint-based technique for repairing neural network classifiers. The technique aims to fix the logic of the network at an intermediate layer or at the last layer. NNrepair first uses fault localization to find potentially faulty network parameters (such as weights) and then performs repair using constraint solving to apply small modifications to the parameters to remedy the defects. We present novel strategies to enable precise yet efficient repair, such as inferring correctness specifications to act as oracles for intermediate-layer repair and generating experts for each class. We demonstrate the technique in three different scenarios: (1) improving the overall accuracy of a model, (2) fixing security vulnerabilities caused by poisoning of the training data, and (3) improving the robustness of the network against adversarial attacks. Our evaluation on MNIST and CIFAR-10 models shows that NNrepair can improve accuracy by 45.56 percentage points on poisoned data and 10.40 percentage points on adversarial data. NNrepair also provides a small improvement in the overall accuracy of models, without requiring new data or re-training.
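
As a rough illustration of constraint-based repair (not NNrepair's actual encoding), the sketch below uses the Z3 solver to search for bounded deltas on a single output neuron's weights so that the neuron's score clears a required margin on each failing sample. The function name, input format, and bound are assumptions.

from z3 import Real, RealVal, Solver, Sum, sat

def repair_output_neuron(weights, failing, bound=0.5):
    """weights: list of floats; failing: list of (features, required_margin) pairs."""
    deltas = [Real(f"d_{i}") for i in range(len(weights))]
    s = Solver()
    for d in deltas:                                    # keep each weight change small
        s.add(d >= -bound, d <= bound)
    for features, required_margin in failing:           # corrected score must clear the margin
        score = Sum([(RealVal(w) + d) * RealVal(x)
                     for w, d, x in zip(weights, deltas, features)])
        s.add(score >= RealVal(required_margin))
    if s.check() == sat:
        m = s.model()
        return [w + float(m.eval(d).as_fraction()) for w, d in zip(weights, deltas)]
    return None                                          # no repair within the bound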

Compositional Explanations for Image Classifiers

Mar 05, 2021
Hana Chockler, Daniel Kroening, Youcheng Sun

Existing algorithms for explaining the output of image classifiers perform poorly on inputs where the object of interest is partially occluded. We present a novel black-box algorithm for computing explanations that uses a principled approach based on causal theory. We implement the method in the tool CET (Compositional Explanation Tool). Owing to the compositionality of its algorithm, CET computes explanations that are much more accurate than those generated by existing explanation tools on images with occlusions, and it delivers performance comparable to the state of the art when explaining images without occlusions.
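
For intuition only, here is a generic occlusion-importance sketch; CET's actual algorithm is compositional and grounded in causal theory, which this does not implement. The predict_proba() callable, patch size, and baseline color are assumptions.

import numpy as np

def occlusion_map(predict_proba, image, target, patch=8, baseline=0.5):
    """Slide a neutral patch over the image and record the drop in the target-class score."""
    h, w = image.shape[:2]
    base_score = predict_proba(image)[target]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heat[i // patch, j // patch] = base_score - predict_proba(occluded)[target]
    return heat          # large values mark regions the classifier relies on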
