Quadratic programs (QPs) subject to multiple time-dependent control barrier function (CBF) based constraints have been used to design safety-critical controllers. However, ensuring that the QP subject to multiple CBF constraints admits a solution at all times is non-trivial. We quantify the feasible solution space of the QP in terms of its volume, and we introduce a novel feasible-space-volume-monitoring control barrier function that promotes compatibility of the barrier functions and, hence, the existence of a solution at all times. We show empirically that our approach not only enhances feasibility but also exhibits reduced sensitivity to changes in hyperparameters, such as the gains of the nominal controller. Finally, paired with a global planner, we evaluate our controller for navigation among humans in the AWS Hospital Gazebo environment, where it is demonstrated to outperform the standard CBF-QP controller in maintaining feasibility.
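As a rough illustration of the CBF-QP safety filter this abstract builds on, the sketch below solves a single-barrier QP for a single integrator with cvxpy; the dynamics, gains, barrier choice, and solver defaults are illustrative assumptions, not the paper's formulation. With multiple barriers, the same QP can become infeasible, which is exactly the failure mode the proposed volume-monitoring CBF guards against.

```python
# Illustrative CBF-QP safety filter (not the paper's exact formulation).
# Single integrator x_dot = u, circular obstacle barrier
# h(x) = ||x - x_o||^2 - r^2, CBF constraint: dh/dt + alpha * h >= 0.
import numpy as np
import cvxpy as cp

def cbf_qp_filter(x, u_nom, x_obs, r, alpha=1.0):
    u = cp.Variable(2)
    h = float(np.dot(x - x_obs, x - x_obs) - r**2)  # barrier value
    grad_h = 2.0 * (x - x_obs)                      # dh/dx
    # The CBF constraint is linear in u: grad_h @ u + alpha * h >= 0.
    prob = cp.Problem(cp.Minimize(cp.sum_squares(u - u_nom)),
                      [grad_h @ u + alpha * h >= 0])
    prob.solve()
    # With several such constraints the QP may become infeasible;
    # prob.status exposes exactly that failure mode.
    return u.value, prob.status

u_safe, status = cbf_qp_filter(x=np.array([0.0, 0.0]),
                               u_nom=np.array([1.0, 0.0]),
                               x_obs=np.array([2.0, 0.1]), r=0.5)
```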
In this work, we assess the theoretical limitations of determining guaranteed stability and accuracy of neural networks in classification tasks. We consider the classical distribution-agnostic framework and algorithms that minimise empirical risk, potentially subject to some weight regularisation. We show that there is a large family of tasks for which computing and verifying ideal stable and accurate neural networks in these settings is extremely challenging, if possible at all, even when such ideal solutions exist within the given class of neural architectures.
Signal Temporal Logic (STL) has become a popular tool for expressing formal requirements of Cyber-Physical Systems (CPS), yet verifying STL properties of neural-network-controlled CPS remains largely unexplored. In this paper, we present a model for the verification of Neural Network (NN) controllers against general STL specifications using a custom neural architecture that maps an STL formula into a feed-forward neural network with ReLU activation. When both the plant model and the controller are ReLU-activated neural networks, we reduce the STL verification problem to reachability in ReLU neural networks. For neural network controllers with general activation functions, we propose a new sound and complete verification approach based on computing the Lipschitz constant of the closed-loop control system. We demonstrate the practical efficacy of our techniques on a number of examples of learning-enabled control systems.
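To make the STL-to-ReLU mapping concrete, here is a minimal sketch (not the paper's construction) using the identities min(a, b) = a - ReLU(a - b) and max(a, b) = a + ReLU(b - a), which let Boolean and temporal robustness computations be expressed as ReLU layers:

```python
# Sketch: STL robustness via ReLU arithmetic (illustrative, in numpy).
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_min(a, b):          # min(a, b) = a - relu(a - b)
    return a - relu(a - b)

def relu_max(a, b):          # max(a, b) = a + relu(b - a)
    return a + relu(b - a)

def always_gt(signal, c):
    # Robustness of G_[0,T] (x > c): min over time of (x_t - c),
    # folded as a chain of ReLU-min operations (one "layer" per step).
    rho = signal[0] - c
    for x_t in signal[1:]:
        rho = relu_min(rho, x_t - c)
    return rho

sig = np.array([0.9, 1.2, 1.1, 0.8])
print(always_gt(sig, 0.5))   # 0.3: worst-case margin above the threshold
```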
This study presents a novel approach for modeling and simulating human-vehicle interactions in order to examine the effects of automated driving systems (ADS) on driving performance and driver control workload. Existing driver-ADS interaction studies have relied on simulated or real-world human driver experiments, which are limited in providing an objective evaluation of the dynamic interactions and control workloads imposed on the driver. Our approach leverages an integrated human-model-based active driving system (HuMADS) to simulate the dynamic interaction between the driver model and the haptic-based ADS during a vehicle overtaking task. Two driver arm-steering models were developed for tense and relaxed human driver conditions and validated against experimental data. We conducted a simulation study to evaluate the effects of three haptic shared control conditions (based on the presence and type of control conflict) on overtaking task performance and driver workload. We found that No Conflict shared control scenarios result in improved driving performance and reduced control workload, while Conflict scenarios result in unsafe maneuvers and increased workload. These findings, which are consistent with experimental studies, demonstrate the potential of our approach to improve future ADS design for safer driver assistance systems.
In this paper, we consider the problem of synthesizing a controller in the presence of uncertainty such that the resulting closed-loop system satisfies certain hard constraints while optimizing certain (soft) performance objectives. We assume that the hard constraints encoding safety or mission-critical task objectives are expressed using Signal Temporal Logic (STL), while performance is quantified using standard cost functions on system trajectories. To prioritize the satisfaction of the hard STL constraints, we utilize the framework of control barrier functions (CBFs) and algorithmically obtain CBFs for STL objectives. We assume that the controllers are modeled using neural networks (NNs) and provide an optimization algorithm to learn the NN controller parameters that optimize performance at a user-specified robustness margin for the safety specifications. We use the formalism of risk measures to evaluate the risk incurred by the trade-off between the robustness margin of the system and its performance. We demonstrate the efficacy of our approach on well-known difficult examples for nonlinear control, such as a quadrotor and a unicycle, where the mission objectives for each system include hard timing constraints and safety objectives.
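A minimal sketch of the training idea, under illustrative assumptions (single-integrator dynamics, a circular-obstacle barrier h, and a simple hinge penalty in PyTorch); the paper's actual algorithm, CBF construction, and risk measures are not reproduced here:

```python
# Illustrative sketch: train a small NN controller so that a barrier
# value h(x) stays above a chosen robustness margin while a quadratic
# performance cost is minimized (hypothetical dynamics and penalty).
import torch
import torch.nn as nn

torch.manual_seed(0)
ctrl = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 2))
opt = torch.optim.Adam(ctrl.parameters(), lr=1e-2)
margin = 0.1                           # user-specified robustness margin
x_obs, r = torch.tensor([1.0, 0.0]), 0.4

for epoch in range(200):
    x = torch.tensor([0.0, 0.0])       # single rollout for brevity
    cost, violation = 0.0, 0.0
    for _ in range(20):                # discrete-time single integrator
        u = ctrl(x)
        x = x + 0.05 * u
        cost = cost + (u ** 2).sum()                    # performance cost
        h = ((x - x_obs) ** 2).sum() - r ** 2           # barrier value
        violation = violation + torch.relu(margin - h)  # hinge penalty
    # A fixed weight stands in for the risk-based trade-off in the paper.
    loss = cost + 50.0 * violation
    opt.zero_grad()
    loss.backward()
    opt.step()
```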
In this paper, we study Stochastic Control Barrier Functions (SCBFs) to enable the design of probabilistically safe real-time controllers in the presence of uncertainties and based on noisy measurements. Our goal is to design controllers that bound the probability of a system failure in finite time to a given desired value. To that end, we first estimate the system states from the noisy measurements using an Extended Kalman Filter and compute confidence intervals on the filtering errors. Then, accounting for these filtering errors, we derive sufficient conditions on the control input, based on the estimated states, that bound the probability that the true states of the system enter an unsafe region within a finite time interval. We show that these sufficient conditions are linear constraints on the control input and hence can be used in tractable optimization problems to achieve safety, in addition to other properties such as reachability and stability. Our approach is evaluated using a simulation of a lane-changing scenario on a highway with dense traffic.
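The pattern described above can be sketched as follows, assuming a linear measurement model for brevity; the tightening rule kappa * sqrt(trace(P)) is a placeholder stand-in for the paper's derived confidence bounds:

```python
# Sketch: Kalman-style estimate + covariance tighten a linear CBF
# constraint on u (measurement model and tightening rule are
# placeholders, not the paper's derivation).
import numpy as np

def ekf_update(x_hat, P, z, H, R):
    # Standard Kalman measurement update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (z - H @ x_hat)
    P = (np.eye(len(x_hat)) - K @ H) @ P
    return x_hat, P

def tightened_cbf_constraint(x_hat, P, x_obs, r, alpha=1.0, kappa=2.0):
    # Barrier evaluated at the estimate, tightened by a confidence term
    # to account for the filtering error.
    h_hat = np.dot(x_hat - x_obs, x_hat - x_obs) - r**2
    grad_h = 2.0 * (x_hat - x_obs)
    eps = kappa * np.sqrt(np.trace(P))
    return grad_h, -alpha * (h_hat - eps)   # feasible u: grad_h @ u >= rhs

x_hat, P = np.zeros(2), 0.2 * np.eye(2)
x_hat, P = ekf_update(x_hat, P, z=np.array([0.1, -0.05]),
                      H=np.eye(2), R=0.01 * np.eye(2))
a, rhs = tightened_cbf_constraint(x_hat, P, x_obs=np.array([1.0, 0.0]), r=0.3)
# a @ u >= rhs is a linear constraint, directly usable in a tractable QP.
```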
Safety is a critical concern for the next generation of autonomy, which is likely to rely heavily on deep neural networks for perception and control. Formally verifying the safety and robustness of well-trained DNNs and learning-enabled systems under attacks, model uncertainties, and sensing errors is essential for safe autonomy. This research proposes a framework to repair unsafe DNNs in safety-critical systems using reachability analysis. The repair process is inspired by adversarial training, which has demonstrated high effectiveness in improving the safety and robustness of DNNs. Unlike traditional adversarial training approaches, where adversarial examples are obtained from random attacks and may not be representative of all unsafe behaviors, our repair process uses reachability analysis to compute the exact unsafe regions and identify sufficiently representative examples, enhancing the efficacy and efficiency of the adversarial training. The performance of our framework is evaluated on two types of benchmarks for which no safe models are available as references. One is a DNN controller for aircraft collision avoidance with access to training data. The other is a rocket lander, where our framework can be seamlessly integrated with the well-known deep deterministic policy gradient (DDPG) reinforcement learning algorithm. The experimental results show that our framework can successfully repair all instances on multiple safety specifications with negligible performance degradation. In addition, to increase the computational and memory efficiency of the reachability analysis algorithm, we propose a depth-first-search algorithm that combines an existing exact analysis method with an over-approximation approach based on a new set representation. Experimental results show that our method achieves a five-fold improvement in runtime and a two-fold improvement in memory usage compared to exact analysis.
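A conceptual sketch of the repair loop, with dense input-grid probing standing in for the paper's exact reachability analysis (which this toy substitute does not replicate); the network, safety bound, and projection target are all hypothetical:

```python
# Conceptual repair loop: probe the input domain for unsafe outputs,
# then fine-tune on those examples (dense grid probing is a toy
# stand-in for exact reachability; all quantities are hypothetical).
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

def unsafe_inputs(net, lo=-1.0, hi=1.0, n=200, y_max=0.5):
    # Placeholder for reachability analysis: densely probe the input
    # domain and collect points whose outputs exceed the safe bound.
    xs = torch.linspace(lo, hi, n).unsqueeze(1)
    with torch.no_grad():
        ys = net(xs)
    return xs[(ys > y_max).squeeze()]

for epoch in range(100):
    bad = unsafe_inputs(net)
    if len(bad) == 0:
        break                                  # property holds on the probe grid
    target = torch.full((len(bad), 1), 0.5)    # pull outputs back to the safe bound
    # In practice this term is mixed with the original training objective
    # (or the DDPG loss) to limit performance degradation.
    loss = ((net(bad) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```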
Deep convolutional neural networks have been widely employed as an effective technique for handling complex and practical problems. However, one of the fundamental shortcomings is the lack of formal methods to analyze their behavior. To address this challenge, we propose an approach to compute the exact reachable sets of a network given an input domain, where the reachable set is represented by the face lattice structure. Beyond computing reachable sets, our approach is also capable of backtracking to the input domain given an output reachable set, so that a full analysis of a network's behavior can be realized. In addition, we introduce an approach for fast analysis, which quickly computes reachable sets by considering only selected sensitive neurons in each layer. The exact pixel-level reachability analysis method is evaluated on a CNN for the CIFAR10 dataset and compared to related works. The fast analysis method is evaluated on a CNN for the CIFAR10 dataset and on the VGG16 architecture for the ImageNet dataset.
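For intuition about layer-by-layer set propagation, the sketch below uses cheap interval arithmetic, which only over-approximates the reachable set; the paper's face-lattice method computes it exactly, so treat this purely as a simplified illustration:

```python
# Simplified illustration: interval-arithmetic over-approximation of a
# ReLU network's reachable set; conveys only the layer-by-layer
# propagation idea, not the exact face-lattice computation.
import numpy as np

def interval_affine(lo, hi, W, b):
    # Propagate an axis-aligned box through x -> W @ x + b.
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def interval_relu(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Tiny two-layer ReLU network with box input domain [0, 1]^2.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.array([-0.5])
lo, hi = np.zeros(2), np.ones(2)
lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)
print(lo, hi)   # a box guaranteed to enclose the exact output set
```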
Sampling-based methods such as Rapidly-exploring Random Trees (RRTs) have been widely used for generating motion paths for autonomous mobile systems. In this work, we extend time-based RRTs with Control Barrier Functions (CBFs) to generate safe motion plans in dynamic environments with many pedestrians. Our framework is based upon a human motion prediction model that is well suited to narrow indoor environments. We demonstrate our approach on a high-fidelity model of the Toyota Human Support Robot navigating in narrow corridors, and we show in three scenarios that our proposed online method can navigate safely in the presence of moving agents with unknown dynamics.
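A minimal sketch of the CBF-gated tree extension idea, with a hand-rolled constant-velocity pedestrian prediction and simplified node/time bookkeeping standing in for the paper's prediction model:

```python
# Sketch of a CBF-gated time-based RRT extension step (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def barrier(x, t, ped_traj, r=0.5):
    # Barrier against the *predicted* pedestrian position at time t.
    p = ped_traj(t)
    return float(np.dot(x - p, x - p) - r**2)

def extend(tree, ped_traj, step=0.2, dt=0.1, alpha=1.0):
    sample = rng.uniform(-5, 5, size=2)
    x, t = min(tree, key=lambda n: np.linalg.norm(n[0] - sample))  # nearest node
    d = sample - x
    x_new = x + step * d / (np.linalg.norm(d) + 1e-9)
    h_now = barrier(x, t, ped_traj)
    h_next = barrier(x_new, t + dt, ped_traj)
    # Discrete-time CBF condition along the edge:
    # h_next - h_now >= -alpha * dt * h_now.
    if h_now > 0 and h_next >= (1.0 - alpha * dt) * h_now:
        tree.append((x_new, t + dt))

ped = lambda t: np.array([2.0 - 0.5 * t, 0.0])   # constant-velocity prediction
tree = [(np.zeros(2), 0.0)]
for _ in range(200):
    extend(tree, ped)
print(len(tree), "nodes accepted")
```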
Autonomous vehicles are complex systems that are challenging to test and debug. A requirements-driven approach to the development process can decrease the resources required to design and test these systems while simultaneously increasing reliability. We present a testing framework that uses signal temporal logic (STL), a precise and unambiguous requirements language. Our framework evaluates test cases against the STL formulae and automatically identifies test cases that fail to satisfy the requirements. One of the key features of our tool is its support for machine learning (ML) components in the system design, such as deep neural networks. The framework allows evaluation of the control algorithms, including the ML components, and it also includes models of CCD camera, lidar, and radar sensors, as well as the vehicle environment. We use multiple methods to generate test cases, including covering arrays, an efficient method for searching discrete variable spaces. The resulting test cases can be used to debug the controller design by identifying controller behaviors that do not satisfy requirements, and they can enhance the testing phase of development by identifying critical corner cases that correspond to the limits of the system's allowed behaviors. We present STL requirements for an autonomous vehicle system that capture both component-level and system-level behaviors. Additionally, we present three driving scenarios and demonstrate how our requirements-driven testing framework can be used to identify critical system behaviors and support the development process.
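As a toy illustration of the evaluation loop, the sketch below enumerates discrete test parameters (a full factorial stands in for a covering array), simulates a hypothetical braking scenario, and checks an STL-style "always" requirement on each trace:

```python
# Toy version of the evaluation loop: enumerate discrete test
# parameters, simulate, and monitor an STL "always" requirement.
import itertools
import numpy as np

def simulate(speed, brake_gain, steps=50, dt=0.1):
    # Hypothetical ego model: decelerate toward a stopped lead vehicle.
    gap, v, gaps = 30.0, speed, []
    for _ in range(steps):
        v = max(v - brake_gain * dt, 0.0)
        gap -= v * dt
        gaps.append(gap)
    return np.array(gaps)

def always_gt_robustness(trace, threshold):
    # Robustness of G (gap > threshold): worst-case margin over the trace.
    return float(np.min(trace - threshold))

speeds, gains = [10.0, 15.0, 20.0], [2.0, 4.0, 6.0]
for speed, gain in itertools.product(speeds, gains):  # covering-array stand-in
    rho = always_gt_robustness(simulate(speed, gain), threshold=2.0)
    if rho < 0:   # failing case flagged for debugging
        print(f"FAIL speed={speed} brake_gain={gain} robustness={rho:.2f}")
```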