Ravi Mangal

Carnegie Mellon University

Assumption Generation for the Verification of Learning-Enabled Autonomous Systems

May 27, 2023
Corina Pasareanu, Ravi Mangal, Divya Gopinath, Huafeng Yu

Providing safety guarantees for autonomous systems is difficult as these systems operate in complex environments that require the use of learning-enabled components, such as deep neural networks (DNNs) for visual perception. DNNs are hard to analyze due to their size (they can have thousands or millions of parameters), lack of formal specifications (DNNs are typically learnt from labeled data, in the absence of any formal requirements), and sensitivity to small changes in the environment. We present an assume-guarantee style compositional approach for the formal verification of system-level safety properties of such autonomous systems. Our insight is that we can analyze the system in the absence of the DNN perception components by automatically synthesizing assumptions on the DNN behaviour that guarantee the satisfaction of the required safety properties. The synthesized assumptions are the weakest in the sense that they characterize the output sequences of all the possible DNNs that, plugged into the autonomous system, guarantee the required safety properties. The assumptions can be leveraged as run-time monitors over a deployed DNN to guarantee the safety of the overall system; they can also be mined to extract local specifications for use during training and testing of DNNs. We illustrate our approach on a case study taken from the autonomous airplanes domain that uses a complex DNN for perception.
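
To make the run-time-monitoring use of the synthesized assumptions concrete, here is a minimal sketch of checking an assumption over DNN output sequences online, with the assumption represented as a small finite-state automaton. The states, output labels, and transitions are hypothetical; they illustrate the mechanism, not the assumptions synthesized for the case study.

```python
# Minimal sketch: a synthesized assumption used as a run-time monitor.
# Assumption (for this sketch only): the weakest assumption is given as a DFA
# over discretized DNN outputs; the transition table below is hypothetical.

class AssumptionMonitor:
    def __init__(self, transitions, initial_state, error_state):
        self.transitions = transitions   # dict: (state, dnn_output) -> next state
        self.state = initial_state
        self.error_state = error_state   # reaching it means the assumption is violated

    def step(self, dnn_output):
        """Advance on one DNN output; return True while the observed output
        sequence still satisfies the assumption."""
        self.state = self.transitions.get((self.state, dnn_output), self.error_state)
        return self.state != self.error_state

# Hypothetical assumption: the DNN may report a large deviation only if the
# previous estimate was already off-centre.
transitions = {
    ("centred", "on_centre"): "centred",
    ("centred", "slight_off"): "off",
    ("off", "slight_off"): "off",
    ("off", "far_off"): "off",
    ("off", "on_centre"): "centred",
}
monitor = AssumptionMonitor(transitions, "centred", error_state="violated")
for estimate in ["on_centre", "slight_off", "far_off", "on_centre"]:
    if not monitor.step(estimate):
        print("Assumption violated: switch to a safe fallback controller")
        break
else:
    print("Assumption held over the observed output sequence")
```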


Closed-loop Analysis of Vision-based Autonomous Systems: A Case Study

Feb 06, 2023
Corina S. Pasareanu, Ravi Mangal, Divya Gopinath, Sinem Getir Yaman, Calum Imrie, Radu Calinescu, Huafeng Yu

Deep neural networks (DNNs) are increasingly used in safety-critical autonomous systems as perception components processing high-dimensional image data. Formal analysis of these systems is particularly challenging due to the complexity of the perception DNNs, the sensors (cameras), and the environment conditions. We present a case study applying formal probabilistic analysis techniques to an experimental autonomous system that guides airplanes on taxiways using a perception DNN. We address the above challenges by replacing the camera and the network with a compact probabilistic abstraction built from the confusion matrices computed for the DNN on a representative image data set. We also show how to leverage local, DNN-specific analyses as run-time guards to increase the safety of the overall system. Our findings are applicable to other autonomous systems that use complex DNNs for perception.
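
As a minimal sketch of the abstraction described above, the camera and DNN can be replaced by a draw from the confusion-matrix row of the true state; the closed loop is then analyzed or simulated without the DNN in the loop. The class names and probabilities below are invented for illustration, not the values from the case study.

```python
import random

# Confusion-matrix abstraction of the perception pipeline (illustrative values).
CLASSES = ["left_of_centreline", "on_centreline", "right_of_centreline"]
CONFUSION = {  # rows: true class -> distribution over predicted classes (sums to 1.0)
    "left_of_centreline":  [0.90, 0.08, 0.02],
    "on_centreline":       [0.05, 0.90, 0.05],
    "right_of_centreline": [0.02, 0.08, 0.90],
}

def abstract_perception(true_state):
    """Replace camera + DNN with a sample from the corresponding confusion-matrix row."""
    return random.choices(CLASSES, weights=CONFUSION[true_state])[0]

true_state = "on_centreline"
print(f"true: {true_state}, perceived: {abstract_perception(true_state)}")
```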


On the Perils of Cascading Robust Classifiers

Jun 01, 2022
Ravi Mangal, Zifan Wang, Chi Zhang, Klas Leino, Corina Pasareanu, Matt Fredrikson

Ensembling certifiably robust neural networks has been shown to be a promising approach for improving the \emph{certified robust accuracy} of neural models. Black-box ensembles that assume only query-access to the constituent models (and their robustness certifiers) during prediction are particularly attractive due to their modular structure. Cascading ensembles are a popular instance of black-box ensembles that appear to improve certified robust accuracies in practice. However, we find that the robustness certifier used by a cascading ensemble is unsound. That is, when a cascading ensemble is certified as locally robust at an input $x$, there can, in fact, be inputs $x'$ in the $\epsilon$-ball centered at $x$ such that the cascade's prediction at $x'$ differs from its prediction at $x$. We present an alternate black-box ensembling mechanism based on weighted voting which we prove to be sound for robustness certification. Via a thought experiment, we demonstrate that if the constituent classifiers are suitably diverse, voting ensembles can improve certified performance. Our code is available at \url{https://github.com/TristaChi/ensembleKW}.
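
A rough illustration of the two black-box ensembling schemes, assuming each constituent exposes hypothetical `predict(x)` and `certify(x, eps)` methods; these interfaces and the voting certification rule are assumptions of this sketch, not the paper's implementation.

```python
def cascade_predict(models, x, eps):
    """Cascading ensemble: answer with the first constituent whose certifier fires.
    Certifying the cascade this way is unsound: at a nearby x', an earlier
    constituent may certify instead and return a different label."""
    for m in models:
        if m.certify(x, eps):
            return m.predict(x), True   # (label, "certified") -- the unsound claim
    return models[-1].predict(x), False

def voting_certify(models, weights, x, eps):
    """Weighted-voting ensemble: certify the prediction only if the constituents
    that are themselves certified at x carry enough weight that the remaining
    constituents cannot overturn the vote anywhere in the eps-ball."""
    votes = {}                 # weighted vote over all constituents' predictions at x
    certified_votes = {}       # portion of the vote backed by a local-robustness certificate
    uncertified_weight = 0.0
    for m, w in zip(models, weights):
        label = m.predict(x)
        votes[label] = votes.get(label, 0.0) + w
        if m.certify(x, eps):
            certified_votes[label] = certified_votes.get(label, 0.0) + w
        else:
            uncertified_weight += w
    winner = max(votes, key=votes.get)
    # Sufficient condition: even if every uncertified constituent switched to the
    # strongest competitor inside the eps-ball, the winner would keep winning.
    competitor = max((v for lab, v in certified_votes.items() if lab != winner), default=0.0)
    certified = certified_votes.get(winner, 0.0) > competitor + uncertified_weight
    return winner, certified
```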


Discrete-Event Controller Synthesis for Autonomous Systems with Deep-Learning Perception Components

Feb 07, 2022
Radu Calinescu, Calum Imrie, Ravi Mangal, Corina Păsăreanu, Misael Alpizar Santana, Gricel Vázquez

We present DEEPDECS, a new method for the synthesis of correct-by-construction discrete-event controllers for autonomous systems that use deep neural network (DNN) classifiers for the perception step of their decision-making processes. Despite major advances in deep learning in recent years, providing safety guarantees for these systems remains very challenging. Our controller synthesis method addresses this challenge by integrating DNN verification with the synthesis of verified Markov models. The synthesised models correspond to discrete-event controllers guaranteed to satisfy the safety, dependability and performance requirements of the autonomous system, and to be Pareto optimal with respect to a set of optimisation criteria. We use the method in simulation to synthesise controllers for mobile-robot collision avoidance, and for maintaining driver attentiveness in shared-control autonomous driving.
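
As a small illustration of the selection step at the end of such a synthesis loop, the sketch below filters controller candidates that have already been model checked, using hypothetical requirement thresholds, and keeps the Pareto front over two assumed optimisation criteria. The field names, thresholds, and criteria are invented for the example.

```python
def satisfies_requirements(c):
    # Hypothetical requirements, e.g. P(collision) <= 1e-4 and P(mission done) >= 0.95.
    return c["p_collision"] <= 1e-4 and c["p_mission"] >= 0.95

def dominates(a, b, criteria):
    """a Pareto-dominates b: no worse on every criterion, strictly better on one
    (all criteria are to be maximised in this sketch)."""
    return all(a[k] >= b[k] for k in criteria) and any(a[k] > b[k] for k in criteria)

def select_controllers(candidates, criteria=("p_mission", "energy_efficiency")):
    feasible = [c for c in candidates if satisfies_requirements(c)]
    return [c for c in feasible
            if not any(dominates(o, c, criteria) for o in feasible if o is not c)]

candidates = [
    {"p_collision": 5e-5, "p_mission": 0.97, "energy_efficiency": 0.80},
    {"p_collision": 2e-5, "p_mission": 0.96, "energy_efficiency": 0.90},
    {"p_collision": 3e-4, "p_mission": 0.99, "energy_efficiency": 0.95},  # fails the safety requirement
]
print(select_controllers(candidates))   # both feasible candidates are Pareto-optimal
```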

* 18 pages, 6 figures, 2 tables

Self-Repairing Neural Networks: Provable Safety for Deep Networks via Dynamic Repair

Jul 23, 2021
Klas Leino, Aymeric Fromherz, Ravi Mangal, Matt Fredrikson, Bryan Parno, Corina Păsăreanu

Neural networks are increasingly being deployed in contexts where safety is a critical concern. In this work, we propose a way to construct neural network classifiers that dynamically repair violations of non-relational safety constraints called safe ordering properties. Safe ordering properties relate requirements on the ordering of a network's output indices to conditions on their input, and are sufficient to express most useful notions of non-relational safety for classifiers. Our approach is based on a novel self-repairing layer, which provably yields safe outputs regardless of the characteristics of its input. We compose this layer with an existing network to construct a self-repairing network (SR-Net), and show that in addition to providing safe outputs, the SR-Net is guaranteed to preserve the accuracy of the original network. Notably, our approach is independent of the size and architecture of the network being repaired, depending only on the specified property and the dimension of the network's output; thus it is scalable to large state-of-the-art networks. We show that our approach can be implemented using vectorized computations that execute efficiently on a GPU, introducing run-time overhead of less than one millisecond on current hardware -- even on large, widely-used networks containing hundreds of thousands of neurons and millions of parameters.
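
The sketch below conveys the repair idea in a simplified form: given the network's logits and the ordering constraints active for an input, it reassigns the same logit values along a constraint-respecting order that stays close to the network's own ranking. This is not the paper's SR-layer; in particular it is not vectorized, does not reproduce the accuracy-preservation argument, and assumes the constraint graph is acyclic and the logit values distinct so the resulting order is strict.

```python
import heapq

def repair(logits, constraints):
    """Return logits satisfying every constraint (hi, lo), meaning output[hi] > output[lo].
    Greedy choice: prefer indices the original network already ranked highly."""
    n = len(logits)
    succ = {i: [] for i in range(n)}
    indeg = [0] * n
    for hi, lo in constraints:
        succ[hi].append(lo)
        indeg[lo] += 1
    heap = [(-logits[i], i) for i in range(n) if indeg[i] == 0]
    heapq.heapify(heap)
    order = []                       # topological order respecting the constraints
    while heap:
        _, i = heapq.heappop(heap)
        order.append(i)
        for j in succ[i]:
            indeg[j] -= 1
            if indeg[j] == 0:
                heapq.heappush(heap, (-logits[j], j))
    repaired = [0.0] * n
    for value, i in zip(sorted(logits, reverse=True), order):
        repaired[i] = value          # hand out the original values, largest first
    return repaired

print(repair([2.0, 5.0, 1.0], constraints=[(0, 1)]))  # [5.0, 2.0, 1.0]: class 0 now above class 1
```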


Robustness of Neural Networks: A Probabilistic and Practical Approach

Feb 15, 2019
Ravi Mangal, Aditya V. Nori, Alessandro Orso

Neural networks are becoming increasingly prevalent in software, and it is therefore important to be able to verify their behavior. Because verifying the correctness of neural networks is extremely challenging, it is common to focus on the verification of other properties of these systems. One important property, in particular, is robustness. Most existing definitions of robustness, however, focus on the worst-case scenario where the inputs are adversarial. Such notions of robustness are too strong, and unlikely to be satisfied by, and verifiable for, practical neural networks. Observing that real-world inputs to neural networks are drawn from non-adversarial probability distributions, we propose a novel notion of robustness: probabilistic robustness, which requires the neural network to be robust with at least $(1 - \epsilon)$ probability with respect to the input distribution. This probabilistic approach is practical and provides a principled way of estimating the robustness of a neural network. We also present an algorithm, based on abstract interpretation and importance sampling, for checking whether a neural network is probabilistically robust. Our algorithm uses abstract interpretation to approximate the behavior of a neural network and compute an overapproximation of the input regions that violate robustness. It then uses importance sampling to counter the effect of such overapproximation and compute an accurate estimate of the probability that the neural network violates the robustness property.
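
A minimal sketch of the probabilistic-robustness estimate, using plain Monte Carlo as a simplified stand-in for the abstract-interpretation-plus-importance-sampling algorithm described above. `sample_input` and `is_locally_robust` are placeholders a user would supply (for example, an interval-bound local robustness check); both names are assumptions of this sketch.

```python
def estimate_probabilistic_robustness(sample_input, is_locally_robust, radius, n_samples=10_000):
    """Estimate P_x[the network is locally robust in the radius-ball around x],
    with x drawn from the (non-adversarial) input distribution."""
    robust = sum(is_locally_robust(sample_input(), radius) for _ in range(n_samples))
    return robust / n_samples

# The network is deemed probabilistically robust for a given epsilon when the
# estimate is at least 1 - epsilon, up to Monte Carlo sampling error.
```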

* Accepted for publication at ICSE-NIER 2019 