
Chelsea Sidrane


Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems

Sep 28, 2022
Nicholas Rober, Sydney M. Katz, Chelsea Sidrane, Esen Yel, Michael Everett, Mykel J. Kochenderfer, Jonathan P. How


The increasing prevalence of neural networks (NNs) in safety-critical applications calls for methods to certify safe behavior. This paper presents a backward reachability approach for safety verification of neural feedback loops (NFLs), i.e., closed-loop systems with NN control policies. While recent works have focused on forward reachability as a strategy for safety certification of NFLs, backward reachability offers advantages over the forward strategy, particularly in obstacle avoidance scenarios. Prior works have developed techniques for backward reachability analysis for systems without NNs, but the presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible. To overcome these challenges, we use existing forward NN analysis tools to efficiently find an over-approximation of the backprojection (BP) set, i.e., the set of states for which the NN control policy will drive the system to a given target set. We present frameworks for calculating BP over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs and propose computationally efficient strategies. We use numerical results from a variety of models to showcase the proposed algorithms, including a demonstration of safety certification for a 6D system.
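As a rough sketch of the backprojection idea for a linear system (a simplification, not the paper's implementation): the snippet below over-approximates the one-step BP set of an axis-aligned target box with linear programs, assuming an interval relaxation of the NN policy's output over a candidate region has already been obtained from a forward NN analysis tool (not shown). The dynamics matrices and all bounds are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def bp_overapprox_box(A, B, target_lb, target_ub, cand_lb, cand_ub, u_lb, u_ub):
    """Over-approximate the one-step backprojection (BP) set with a box.

    The NN control policy is relaxed to the interval [u_lb, u_ub] over the
    candidate region [cand_lb, cand_ub] (bounds assumed to come from a forward
    NN bounding tool). Any state outside the returned box cannot be driven
    into the target set in one step, even under the relaxed policy.
    """
    nx, nu = A.shape[1], B.shape[1]
    # Decision variables z = [x, u]; require A x + B u to land in the target box.
    A_ub = np.vstack([np.hstack([A, B]),        #  A x + B u <= target_ub
                      np.hstack([-A, -B])])     # -A x - B u <= -target_lb
    b_ub = np.concatenate([target_ub, -target_lb])
    bounds = list(zip(cand_lb, cand_ub)) + list(zip(u_lb, u_ub))

    lb, ub = np.empty(nx), np.empty(nx)
    for i in range(nx):
        c = np.zeros(nx + nu)
        c[i] = 1.0
        lb[i] = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun    # min x_i
        ub[i] = -linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun  # max x_i
    return lb, ub

# Double-integrator example with hypothetical numbers (dt = 1).
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
lb, ub = bp_overapprox_box(A, B,
                           target_lb=np.array([-0.5, -0.5]),
                           target_ub=np.array([0.5, 0.5]),
                           cand_lb=np.array([-3.0, -3.0]),
                           cand_ub=np.array([3.0, 3.0]),
                           u_lb=np.array([-1.0]),
                           u_ub=np.array([1.0]))
print("BP over-approximation:", lb, ub)
```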

* 14 pages, 14 figures. arXiv admin note: substantial text overlap with arXiv:2204.08319 

Verifying Inverse Model Neural Networks

Feb 04, 2022
Chelsea Sidrane, Sydney Katz, Anthony Corso, Mykel J. Kochenderfer


Inverse problems exist in a wide variety of physical domains, from aerospace engineering to medical imaging. The goal is to infer the underlying state from a set of observations. When the forward model that produced the observations is nonlinear and stochastic, solving the inverse problem is very challenging. Neural networks are an appealing solution for solving inverse problems as they can be trained from noisy data and, once trained, are computationally efficient to run. However, inverse model neural networks do not have guarantees of correctness built in, which makes them unreliable for use in safety- and accuracy-critical contexts. In this work, we introduce a method for verifying the correctness of inverse model neural networks. Our approach is to overapproximate a nonlinear, stochastic forward model with piecewise linear constraints and encode both the overapproximate forward model and the neural network inverse model as a mixed-integer program. We demonstrate this verification procedure on a real-world airplane fuel gauge case study. The ability to verify and consequently trust inverse model neural networks allows their use in a wide variety of contexts, from aerospace to medicine.
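To make the encoding concrete, here is a heavily simplified sketch in the same spirit: a toy two-neuron ReLU "inverse" network for the forward model y = x^2 on [0, 2], the forward model replaced by a piecewise-linear envelope, and the combination posed as a mixed-integer program (via PuLP) that maximizes how far the reconstructed observation can drift from the input. The network weights, envelope, and domains are all hypothetical; the paper's actual encoding and case study are considerably more involved.

```python
import pulp

# Toy setting (hypothetical): forward model f(x) = x^2 on x in [0, 2], and a
# hand-made two-ReLU "inverse" network xhat = 0.75*ReLU(y) - 0.45*ReLU(y - 1).
M = 10.0  # big-M constant; valid because |pre-activations| <= 4 for y in [0, 4]

prob = pulp.LpProblem("verify_inverse_model", pulp.LpMaximize)
y = pulp.LpVariable("y", 0, 4)        # observation fed to the inverse network
xhat = pulp.LpVariable("xhat", 0, 2)  # reconstructed state, kept in f's domain
ytil = pulp.LpVariable("ytil")        # any value the overapproximate f(xhat) allows

# ReLU neurons encoded with the standard big-M mixed-integer constraints.
h = []
for i, pre in enumerate([y, y - 1]):
    hvar = pulp.LpVariable(f"h{i}", lowBound=0)
    d = pulp.LpVariable(f"d{i}", cat="Binary")
    prob += hvar >= pre
    prob += hvar <= pre + M * (1 - d)
    prob += hvar <= M * d
    h.append(hvar)
prob += xhat == 0.75 * h[0] - 0.45 * h[1]

# Piecewise-linear envelope of f(x) = x^2 on [0, 2]: tangent lines as lower
# bounds, the secant as an upper bound; ytil is anything between the bounds.
prob += ytil >= 0
prob += ytil >= xhat - 0.25
prob += ytil >= 3 * xhat - 2.25
prob += ytil <= 2 * xhat

# Worst-case overestimate of f(inverse(y)) - y; minimizing instead checks the
# other direction of the reconstruction error.
prob += ytil - y
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], "worst-case error bound:", pulp.value(prob.objective))
```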


OVERT: An Algorithm for Safety Verification of Neural Network Control Policies for Nonlinear Systems

Aug 03, 2021
Chelsea Sidrane, Amir Maleki, Ahmed Irfan, Mykel J. Kochenderfer


Deep learning methods can be used to produce control policies, but certifying their safety is challenging. The resulting networks are nonlinear and often very large. In response to this challenge, we present OVERT: a sound algorithm for safety verification of nonlinear discrete-time closed-loop dynamical systems with neural network control policies. The novelty of OVERT lies in combining ideas from the classical formal methods literature with ideas from the newer neural network verification literature. The central concept of OVERT is to abstract nonlinear functions with a set of optimally tight piecewise linear bounds. Such piecewise linear bounds are designed for seamless integration into ReLU neural network verification tools. OVERT can be used to prove bounded-time safety properties by either computing reachable sets or solving feasibility queries directly. We demonstrate safety verification on several classical benchmark systems. OVERT compares favorably to existing methods both in computation time and in tightness of the reachable set.
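A tiny illustration of the abstraction step, restricted to the special case of a function that is concave on the interval of interest (OVERT additionally splits at inflection points and optimizes where the bounds touch the function, which is not done here): tangent lines give a piecewise-linear upper bound, chords give a lower bound, and soundness can be spot-checked on a dense grid.

```python
import numpy as np

def pwl_bounds_concave(f, df, lo, hi, n_pts=4):
    """Piecewise-linear bounds for a function that is concave on [lo, hi].

    For a concave function, tangent lines lie above the graph, so their
    pointwise minimum is an upper bound; chords between sample points lie
    below the graph, giving a lower bound.
    """
    xs = np.linspace(lo, hi, n_pts)
    tangents = [(df(x0), f(x0) - df(x0) * x0) for x0 in xs]   # (slope, intercept)
    chords = [((f(b) - f(a)) / (b - a), f(a) - (f(b) - f(a)) / (b - a) * a)
              for a, b in zip(xs[:-1], xs[1:])]

    def upper(x):  # pointwise minimum over tangent lines
        return np.min([m * x + c for m, c in tangents], axis=0)

    def lower(x):  # the chord active on x's segment
        idx = np.clip(np.searchsorted(xs, x, side="right") - 1, 0, n_pts - 2)
        m, c = np.array(chords).T
        return m[idx] * x + c[idx]

    return lower, upper

# Bound sin on [0, pi], where it is concave, and check soundness on a grid.
lower, upper = pwl_bounds_concave(np.sin, np.cos, 0.0, np.pi, n_pts=5)
grid = np.linspace(0.0, np.pi, 1000)
assert np.all(lower(grid) <= np.sin(grid) + 1e-9)
assert np.all(np.sin(grid) <= upper(grid) + 1e-9)
print("max gap between bounds:", np.max(upper(grid) - lower(grid)))
```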

* 44 pages, under review 

Machine Learning for Generalizable Prediction of Flood Susceptibility

Oct 15, 2019
Chelsea Sidrane, Dylan J Fitzpatrick, Andrew Annex, Diane O'Donoghue, Yarin Gal, Piotr Biliński


Flooding is a destructive and dangerous hazard, and climate change appears to be increasing the frequency of catastrophic flooding events around the world. Physics-based flood models are costly to calibrate and are rarely generalizable across different river basins, as model outputs are sensitive to site-specific parameters and human-regulated infrastructure. In contrast, statistical models implicitly account for such factors through the data on which they are trained. Such models trained primarily from remotely-sensed Earth observation data could reduce the need for extensive in-situ measurements. In this work, we develop generalizable, multi-basin models of river flooding susceptibility using geographically-distributed data from the USGS stream gauge network. Machine learning models are trained in a supervised framework to predict two measures of flood susceptibility from a mix of river basin attributes, impervious surface cover information derived from satellite imagery, and historical records of rainfall and stream height. We report the prediction performance of multiple models using precision-recall curves and compare it with the performance of naive baselines. This work on multi-basin flood prediction represents a step toward making flood prediction accessible to all at-risk communities.
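A minimal sketch of the evaluation protocol described above, using synthetic stand-in data and hypothetical feature names (not the paper's dataset or models): a gradient-boosted classifier is scored with basin-grouped cross-validation, so held-out basins measure cross-basin generalization, and results are summarized with a precision-recall curve against a naive baseline.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GroupKFold
from sklearn.metrics import precision_recall_curve, auc

# Synthetic stand-in data: each row is a (basin, period) record with
# hypothetical standardized features; `y` is a binary susceptibility label.
rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(size=n),   # basin drainage area
    rng.normal(size=n),   # impervious surface fraction
    rng.normal(size=n),   # antecedent rainfall
    rng.normal(size=n),   # historical peak stage
])
y = (X[:, 2] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 1.0).astype(int)
basins = rng.integers(0, 20, size=n)  # basin ID for each record

# Hold out whole basins in each fold so the score reflects cross-basin
# generalization rather than within-basin interpolation.
scores = np.zeros(n)
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=basins):
    model = GradientBoostingClassifier().fit(X[train_idx], y[train_idx])
    scores[test_idx] = model.predict_proba(X[test_idx])[:, 1]

precision, recall, _ = precision_recall_curve(y, scores)
baseline = y.mean()  # precision achieved by a naive (random) classifier
print(f"PR-AUC: {auc(recall, precision):.3f}  (naive baseline precision: {baseline:.3f})")
```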

* Will be presented at hadri.ai 2019, a workshop at NeurIPS 

HG-DAgger: Interactive Imitation Learning with Human Experts

Mar 11, 2019
Michael Kelly, Chelsea Sidrane, Katherine Driggs-Campbell, Mykel J. Kochenderfer


Imitation learning has proven to be useful for many real-world problems, but approaches such as behavioral cloning suffer from data mismatch and compounding error issues. One attempt to address these limitations is the DAgger algorithm, which uses the state distribution induced by the novice to sample corrective actions from the expert. Such sampling schemes, however, require the expert to provide action labels without being fully in control of the system. This can decrease safety and, when using humans as experts, is likely to degrade the quality of the collected labels due to perceived actuator lag. In this work, we propose HG-DAgger, a variant of DAgger that is more suitable for interactive imitation learning from human experts in real-world systems. In addition to training a novice policy, HG-DAgger also learns a safety threshold for a model-uncertainty-based risk metric that can be used to predict the performance of the fully trained novice in different regions of the state space. We evaluate our method on both a simulated and real-world autonomous driving task, and demonstrate improved performance over both DAgger and behavioral cloning.
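As a rough, self-contained illustration of the risk-metric idea (all data, model choices, and thresholds below are hypothetical, not the paper's implementation): a bootstrap ensemble stands in for the novice policy, its prediction variance serves as the model-uncertainty risk, and a safety threshold is set from the risk values observed at simulated human-takeover states.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy 1-D task: state = position, action = steering correction.
rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=(500, 1))
actions = np.sin(2.0 * states).ravel() + rng.normal(scale=0.05, size=500)

# Novice: a small bootstrap ensemble; its prediction variance is the
# model-uncertainty ("doubt") risk metric.
ensemble = []
for seed in range(5):
    idx = rng.integers(0, len(states), size=len(states))  # bootstrap resample
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=seed)
    ensemble.append(net.fit(states[idx], actions[idx]))

def risk(x):
    """Ensemble disagreement (variance of predicted actions) at states x."""
    preds = np.stack([net.predict(x) for net in ensemble])
    return preds.var(axis=0)

# States where the human expert took control during rollouts; here,
# out-of-distribution states stand in for those recorded takeovers.
takeover_states = rng.uniform(1.2, 1.8, size=(50, 1))
# One plausible threshold choice: a low quantile of the risk at takeovers, so
# that flagged states resemble those the expert tended to judge unsafe.
tau = np.quantile(risk(takeover_states), 0.1)

test = np.array([[0.0], [1.5]])
print("risk:", risk(test), "threshold:", tau, "flagged:", risk(test) > tau)
```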
