Matthew B. Dwyer

PCV: A Point Cloud-Based Network Verifier

Jan 30, 2023
Arup Kumar Sarker, Farzana Yasmin Ahmad, Matthew B. Dwyer

3D vision with real-time LiDAR-based point cloud data has become a vital part of autonomous-system research; in particular, perception and prediction modules use it for object classification, segmentation, and detection. Despite their success, point cloud-based network models are vulnerable to multiple adversarial attacks, in which small changes to the validation set cause significant performance drops in well-trained networks. Most existing verifiers work well on 2D convolutions, but due to the complex architectures, high-dimensional hyper-parameters, and 3D convolutions of point cloud models, no verifier can perform even basic layer-wise verification on them. Without such verification, it is difficult to draw conclusions about the robustness of a 3D vision model, because there will always be corner cases and adversarial inputs that can compromise the model's effectiveness. In this project, we describe a point cloud-based network verifier that handles the state-of-the-art 3D classifier PointNet and verifies its robustness by generating adversarial inputs. We extract properties from the trained PointNet and change certain factors to produce perturbed inputs. We calculate the impact on model accuracy as a function of the property factor, and test PointNet's robustness against a small collection of perturbed input states resulting from adversarial attacks such as the proposed hybrid reverse signed attack. The experimental results reveal that the robustness of PointNet is affected by our hybrid reverse signed perturbation strategy.
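To make the attack direction concrete, below is a minimal sketch of the standard signed-gradient building block for perturbing a point cloud, in PyTorch. The model interface, input shape, and the exact composition of the paper's "hybrid reverse signed" attack are assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn.functional as F

def signed_perturbation(model, points, label, epsilon=0.01, reverse=False):
    """FGSM-style signed-gradient perturbation of a point cloud.

    Assumptions (not from the paper): `model` maps a (1, N, 3) point
    cloud to class logits, and the "reverse" variant simply negates
    the signed step. The paper's hybrid attack presumably combines
    both directions, but its exact form is not given in the abstract.
    """
    points = points.clone().detach().requires_grad_(True)
    logits = model(points.unsqueeze(0))        # (1, num_classes)
    loss = F.cross_entropy(logits, label)      # label: shape (1,)
    loss.backward()
    step = epsilon * points.grad.sign()
    return (points - step if reverse else points + step).detach()
```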

* 11 pages, 12 figures 

White-box Testing of NLP models with Mask Neuron Coverage

May 10, 2022
Arshdeep Sekhon, Yangfeng Ji, Matthew B. Dwyer, Yanjun Qi

Recent literature has seen growing interest in using black-box strategies, such as CheckList, for testing the behavior of NLP models. Research on white-box testing has developed a number of methods for evaluating how thoroughly the internal behavior of deep models is tested, but they are not applicable to NLP models. We propose a set of white-box testing methods that are customized for transformer-based NLP models. These include Mask Neuron Coverage (MNCOVER), which measures how thoroughly the attention layers in models are exercised during testing. We show that MNCOVER can refine test suites generated by CheckList by substantially reducing their size, by more than 60% on average, while retaining failing tests, thereby concentrating the fault-detection power of the test suite. Further, we show how MNCOVER can be used to guide CheckList input generation, evaluate alternative NLP testing methods, and drive data augmentation to improve accuracy.
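As a rough illustration of how a coverage metric like MNCOVER can shrink a suite while preserving what it exercises: a greedy pass keeps a test only if it covers attention neurons not already covered. This is a simplified sketch; the `coverage_fn` interface and the coverage criterion are assumptions, as MNCOVER's actual masked criterion is more refined.

```python
def reduce_suite(tests, coverage_fn):
    """Greedy coverage-preserving test-suite reduction.

    `coverage_fn(test)` is assumed to return the set of (layer, head,
    position) attention-neuron IDs that `test` activates above some
    threshold; the real MNCOVER criterion differs in its details.
    """
    covered, kept = set(), []
    for test in tests:
        newly_covered = coverage_fn(test) - covered
        if newly_covered:              # test adds coverage: keep it
            kept.append(test)
            covered |= newly_covered
    return kept
```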

* Findings of NAACL 2022, 12 pages 

DNNV: A Framework for Deep Neural Network Verification

May 26, 2021
David Shriver, Sebastian Elbaum, Matthew B. Dwyer

Despite the large number of sophisticated deep neural network (DNN) verification algorithms, DNN verifier developers, users, and researchers still face several challenges. First, verifier developers must contend with the rapidly changing DNN field to support new DNN operations and property types. Second, verifier users have the burden of selecting a verifier input format to specify their problem. Due to the many input formats, this decision can greatly restrict the verifiers that a user may run. Finally, researchers face difficulties in re-using benchmarks to evaluate and compare verifiers, due to the large number of input formats required to run different verifiers. Existing benchmarks are rarely in formats supported by verifiers other than the one for which the benchmark was introduced. In this work we present DNNV, a framework for reducing the burden on DNN verifier researchers, developers, and users. DNNV standardizes input and output formats, includes a simple yet expressive DSL for specifying DNN properties, and provides powerful simplification and reduction operations to facilitate the application, development, and comparison of DNN verifiers. We show how DNNV increases the support of verifiers for existing benchmarks from 30% to 74%.
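For flavor, here is what a local robustness property looks like in DNNV's Python-embedded property DSL, adapted from memory of the project's documented examples; treat the exact helper names (`Network`, `Image`, `Parameter`) and bounds as approximate rather than authoritative.

```python
# local_robustness.py -- a DNNV property sketch (adapted from its docs)
from dnnv.properties import *

N = Network("N")                       # the network under verification
x = Image("inputs/input0.npy")         # a concrete input to perturb around
epsilon = Parameter("epsilon", float, default=1.0)

Forall(
    x_,
    Implies(
        (x - epsilon < x_ < x + epsilon) & (0 <= x_ <= 1),
        argmax(N(x_)) == argmax(N(x)),  # classification must not change
    ),
)
```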


Distribution-Aware Testing of Neural Networks Using Generative Models

Feb 26, 2021
Swaroopa Dola, Matthew B. Dwyer, Mary Lou Soffa

The reliability of software that has a Deep Neural Network (DNN) as a component is urgently important today, given the increasing number of critical applications being deployed with DNNs. The need for reliability raises a need for rigorous testing of the safety and trustworthiness of these systems. In the last few years, there have been a number of research efforts focused on testing DNNs. However, the test generation techniques proposed so far lack a check to determine whether the test inputs they are generating are valid, and thus invalid inputs are produced. To illustrate this situation, we explored three recent DNN testing techniques. Using deep generative model-based input validation, we show that all three techniques generate a significant number of invalid test inputs. We further analyzed the test coverage achieved by the test inputs generated by the DNN testing techniques and showed how invalid test inputs can falsely inflate test coverage metrics. To overcome the inclusion of invalid inputs in testing, we propose a technique that incorporates the valid input space of the DNN model under test into the test generation process. Our technique uses a deep generative model-based algorithm to generate only valid inputs. Results of our empirical studies show that our technique is effective in eliminating invalid tests and boosting the number of valid test inputs generated.
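A minimal sketch of the kind of generative-model validity check the abstract describes, assuming a VAE with the usual `(reconstruction, mu, logvar)` interface and a rejection threshold calibrated on held-out in-distribution data; the paper's exact check may differ.

```python
import torch
import torch.nn.functional as F

def is_valid_input(vae, x, threshold):
    """Accept a generated test input only if the generative model
    explains it well (negative ELBO below a calibrated threshold).

    Assumptions: `vae(x)` returns (reconstruction, mu, logvar), and
    `threshold` comes from scoring held-out training data.
    """
    with torch.no_grad():
        recon, mu, logvar = vae(x)
        recon_err = F.mse_loss(recon, x, reduction="sum")
        # KL divergence term of the ELBO for a Gaussian posterior.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (recon_err + kl).item() <= threshold
```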


Refactoring Neural Networks for Verification

Aug 06, 2019
David Shriver, Dong Xu, Sebastian Elbaum, Matthew B. Dwyer

Deep neural networks (DNNs) are growing in capability and applicability. Their effectiveness has led to their use in safety-critical and autonomous systems, yet there is a dearth of cost-effective methods available for reasoning about the behavior of a DNN. In this paper, we seek to expand the applicability and scalability of existing DNN verification techniques through DNN refactoring. A DNN refactoring defines (a) the transformation of the DNN's architecture, i.e., the number and size of its layers, and (b) the distillation of the learned relationships between the input features and function outputs of the original to train the transformed network. Unlike traditional code refactoring, DNN refactoring does not guarantee functional equivalence of the two networks; rather, it aims to preserve the accuracy of the original network while producing a simpler network that is amenable to more efficient property verification. We present an automated framework for DNN refactoring, and demonstrate its potential effectiveness through three case studies on networks used in autonomous systems.
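Since the abstract defines refactoring as an architecture transformation plus distillation of the original network's behavior, here is a generic knowledge-distillation training step in PyTorch; the temperature, loss, and optimizer choices are illustrative assumptions, not the paper's recipe.

```python
import torch
import torch.nn.functional as F

def distill_step(original, refactored, x, optimizer, temperature=2.0):
    """One distillation step: fit the architecturally simpler
    `refactored` network to the `original` network's softened output
    distribution. A generic sketch, not the paper's exact procedure.
    """
    with torch.no_grad():
        teacher_logits = original(x)
    student_logits = refactored(x)
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```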
