Jyotirmoy V. Deshmukh

Data-Driven Reachability Analysis of Stochastic Dynamical Systems with Conformal Inference

Sep 17, 2023
Navid Hashemi, Xin Qin, Lars Lindemann, Jyotirmoy V. Deshmukh

We consider data-driven reachability analysis of discrete-time stochastic dynamical systems using conformal inference. We assume that we are not provided with a symbolic representation of the stochastic system, but instead have access to a dataset of $K$-step trajectories. The reachability problem is to construct a probabilistic flowpipe such that the probability that a $K$-step trajectory violates the bounds of the flowpipe does not exceed a user-specified failure probability threshold. The key ideas in this paper are: (1) to learn a surrogate predictor model from data, (2) to perform reachability analysis using the surrogate model, and (3) to quantify the error incurred by the surrogate model using conformal inference, in order to give probabilistic reachability guarantees. We focus on learning-enabled control systems with complex closed-loop dynamics that are difficult to model symbolically, but where state transition pairs can be queried, e.g., using a simulator. We demonstrate the applicability of our method on examples from the domain of learning-enabled cyber-physical systems.
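
A minimal sketch of the conformal step in idea (3), assuming the nonconformity score for each calibration trajectory is the surrogate's prediction error; the function name and calling convention are illustrative, not the paper's implementation.

```python
import numpy as np

def conformal_error_bound(errors, delta):
    """Split-conformal quantile of calibration errors.

    errors: nonconformity scores, e.g. the prediction error of the
            surrogate model on each held-out calibration trajectory.
    delta:  user-specified failure probability threshold.
    Returns a radius r such that, for a fresh trajectory from the same
    distribution, P(error > r) <= delta (a marginal guarantee).
    """
    n = len(errors)
    k = int(np.ceil((n + 1) * (1 - delta)))  # conformal quantile index
    if k > n:
        return np.inf  # not enough calibration data for this delta
    return np.sort(errors)[k - 1]

# Hypothetical usage: inflate the surrogate flowpipe by r at each step.
calib_errors = np.abs(np.random.randn(500))  # stand-in for real residuals
r = conformal_error_bound(calib_errors, delta=0.05)
```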

Convex Optimization-based Policy Adaptation to Compensate for Distributional Shifts

Apr 05, 2023
Navid Hashemi, Justin Ruths, Jyotirmoy V. Deshmukh

Many real-world systems involve physical components or operating environments with highly nonlinear and uncertain dynamics. A number of control algorithms can be used to design optimal controllers for such systems, assuming a reasonably high-fidelity model of the actual system. However, the assumptions made on the stochastic dynamics of the model when designing the optimal controller may no longer be valid when the system is deployed in the real world. This paper addresses the following problem: given an optimal trajectory obtained by solving a control problem in the training environment, how do we ensure that the real-world system trajectory tracks this optimal trajectory with minimal error in the deployment environment? In other words, we want to learn how to adapt an optimal trained policy to distribution shifts in the environment. Distribution shifts are problematic in safety-critical systems, where a trained policy may lead to unsafe outcomes during deployment. We show that this problem can be cast as a nonlinear optimization problem that could be solved using heuristic methods such as particle swarm optimization (PSO). However, if we instead consider a convex relaxation of this problem, we can learn policies that track the optimal trajectory with much lower error and faster computation times. We demonstrate the efficacy of our approach on tracking an optimal path using a Dubins car model, and on collision avoidance using both linear and nonlinear models for adaptive cruise control.
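
As one way to see the convex-relaxation idea, the sketch below poses tracking of the training-time optimal trajectory as a quadratic program over assumed linear deployment dynamics, using CVXPY; the dynamics matrices, cost weights, and function name are illustrative assumptions, not the paper's formulation.

```python
import cvxpy as cp
import numpy as np

def adapt_inputs(A, B, x0, x_ref, u_ref, rho=0.1):
    """Convex tracking sketch: choose deployment inputs so that the (assumed)
    linear deployment dynamics x_{t+1} = A x_t + B u_t stay close to the
    optimal trajectory (x_ref, u_ref) computed in the training environment."""
    K, n = x_ref.shape
    m = u_ref.shape[1]
    x = cp.Variable((K, n))
    u = cp.Variable((K - 1, m))
    cost = 0
    constraints = [x[0] == x0]
    for t in range(K - 1):
        constraints.append(x[t + 1] == A @ x[t] + B @ u[t])
        cost += cp.sum_squares(x[t + 1] - x_ref[t + 1])
        cost += rho * cp.sum_squares(u[t] - u_ref[t])
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return u.value

# Hypothetical usage with a 2-state, 1-input linear system.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
x_ref = np.zeros((20, 2))
u_ref = np.zeros((19, 1))
u_adapted = adapt_inputs(A, B, np.array([1.0, 0.0]), x_ref, u_ref)
```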

Multi Agent Path Finding using Evolutionary Game Theory

Dec 05, 2022
Sheryl Paul, Jyotirmoy V. Deshmukh

In this paper, we consider the problem of path finding for a set of homogeneous and autonomous agents navigating a previously unknown stochastic environment. In our problem setting, each agent attempts to maximize a given utility function while respecting safety properties. Our solution is based on ideas from evolutionary game theory, namely replicating policies that perform well and diminishing ones that do not. We perform a comprehensive comparison with related multi-agent planning methods and show that our technique outperforms state-of-the-art RL algorithms, reducing path length by nearly 30% in large spaces. We show that our algorithm is computationally faster than deep RL methods by at least an order of magnitude. We also show that it scales better with the number of agents than other methods, path planning methods in particular. Lastly, we show empirically that the policies we learn are evolutionarily stable and thus impervious to invasion by any other policy.
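
A minimal sketch of the replicator-dynamics intuition (policies with above-average utility grow in population share, others shrink); the specific update rule, step size, and utility estimates below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def replicator_update(probs, utilities, lr=0.1):
    """One replicator-style step over a population of candidate policies:
    increase the share of policies with above-average utility."""
    avg = probs @ utilities                      # average utility of the mix
    probs = probs * (1 + lr * (utilities - avg)) # grow/shrink proportionally
    probs = np.clip(probs, 1e-12, None)          # keep shares positive
    return probs / probs.sum()                   # renormalize to a distribution

# Hypothetical usage: 4 candidate policies with estimated rollout utilities.
p = np.full(4, 0.25)
for _ in range(100):
    u = np.array([1.0, 0.8, 1.2, 0.5])  # stand-in for estimated returns
    p = replicator_update(p, u)
```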

Conformal Prediction for STL Runtime Verification

Nov 03, 2022
Lars Lindemann, Xin Qin, Jyotirmoy V. Deshmukh, George J. Pappas

We are interested in predicting failures of cyber-physical systems during their operation. In particular, we consider stochastic systems and signal temporal logic specifications, and we want to calculate the probability that the current system trajectory violates the specification. The paper presents two predictive runtime verification algorithms that predict future system states from the currently observed system trajectory. As these predictions may not be accurate, we construct prediction regions that quantify prediction uncertainty using conformal prediction, a statistical tool for uncertainty quantification. Our first algorithm directly constructs a prediction region for the satisfaction measure of the specification, so that we can predict specification violations with a desired confidence. The second algorithm first constructs prediction regions for future system states and then uses these to obtain a prediction region for the satisfaction measure. To the best of our knowledge, these are the first formal guarantees for a predictive runtime verification algorithm that applies to widely used trajectory predictors such as RNNs and LSTMs, while being computationally simple and making no assumptions on the underlying distribution. We present numerical experiments on an F-16 aircraft and a self-driving car.
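
A sketch of the conformal calibration behind the first (direct) algorithm, assuming calibration trajectories for which both the true robustness and the robustness predicted from a prefix are available; the score definition and function are illustrative, not the paper's exact construction.

```python
import numpy as np

def robustness_lower_bound(rho_true_cal, rho_pred_cal, rho_pred_new, delta):
    """Direct-method sketch: calibrate on pairs of (true, predicted) STL
    robustness values, then lower-bound the true robustness of a new,
    partially observed trajectory with probability at least 1 - delta.
    Nonconformity score: predicted minus true robustness."""
    scores = np.asarray(rho_pred_cal) - np.asarray(rho_true_cal)
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - delta)))
    c = np.inf if k > n else np.sort(scores)[k - 1]
    # If this bound is positive, satisfaction is predicted with confidence 1 - delta.
    return rho_pred_new - c
```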

Risk-Awareness in Learning Neural Controllers for Temporal Logic Objectives

Oct 14, 2022
Navid Hashemi, Xin Qin, Jyotirmoy V. Deshmukh, Georgios Fainekos, Bardh Hoxha, Danil Prokhorov, Tomoya Yamaguchi

In this paper, we consider the problem of synthesizing a controller in the presence of uncertainty such that the resulting closed-loop system satisfies certain hard constraints while optimizing certain (soft) performance objectives. We assume that the hard constraints encoding safety or mission-critical task objectives are expressed using Signal Temporal Logic (STL), while performance is quantified using standard cost functions on system trajectories. To prioritize the satisfaction of the hard STL constraints, we utilize the framework of control barrier functions (CBFs) and algorithmically obtain CBFs for STL objectives. We assume that the controllers are modeled using neural networks (NNs) and provide an optimization algorithm to learn the NN controller parameters that optimize performance at a user-specified robustness margin for the safety specifications. We use the formalism of risk measures to evaluate the risk incurred by the trade-off between the robustness margin of the system and its performance. We demonstrate the efficacy of our approach on well-known challenging nonlinear control examples such as a quadrotor and a unicycle, where the mission objectives for each system include hard timing constraints and safety objectives.
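
The abstract does not name a specific risk measure; purely as an illustration, the sketch below computes conditional value-at-risk (CVaR), a common coherent risk measure over sampled losses, which could be applied to negated robustness values from sampled rollouts.

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Conditional value-at-risk: the mean loss within the worst (1 - alpha)
    fraction of sampled outcomes."""
    losses = np.sort(np.asarray(losses))
    var = np.quantile(losses, alpha)      # value-at-risk at level alpha
    return losses[losses >= var].mean()   # average loss in the tail beyond VaR

# Hypothetical usage: tail risk of low STL robustness across sampled rollouts.
robustness_samples = np.random.randn(1000) * 0.2 + 0.5
risk = cvar(-robustness_samples, alpha=0.95)
```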

Interactive Learning from Natural Language and Demonstrations using Signal Temporal Logic

Jul 01, 2022
Sara Mohammadinejad, Jesse Thomason, Jyotirmoy V. Deshmukh

Natural language is an intuitive way for humans to communicate tasks to a robot. While natural language (NL) is ambiguous, real-world tasks and their safety requirements need to be communicated unambiguously. Signal Temporal Logic (STL) is a formal logic that can serve as a versatile, expressive, and unambiguous formal language to describe robotic tasks. On the one hand, existing work on using STL in the robotics domain typically requires end-users to express task specifications in STL, a challenge for non-expert users. On the other hand, translating from NL to STL specifications is currently restricted to specific fragments. In this work, we propose DIALOGUESTL, an interactive approach for learning correct and concise STL formulas from (often) ambiguous NL descriptions. We use a combination of semantic parsing, pre-trained transformer-based language models, and user-in-the-loop clarifications, aided by a small number of user demonstrations, to predict the best STL formula to encode NL task descriptions. An advantage of mapping NL to STL is that there has been considerable recent work on the use of reinforcement learning (RL) to identify control policies for robots. We show that we can use Deep Q-Learning techniques to learn optimal policies from the learned STL specifications. We demonstrate that DIALOGUESTL is efficient, scalable, and robust, and has high accuracy in predicting the correct STL formula with a small number of demonstrations and a few interactions with an oracle user.

Formalizing and Evaluating Requirements of Perception Systems for Automated Vehicles using Spatio-Temporal Perception Logic

Jun 29, 2022
Mohammad Hekmatnejad, Bardh Hoxha, Jyotirmoy V. Deshmukh, Yezhou Yang, Georgios Fainekos

Automated vehicles (AVs) heavily depend on robust perception systems. Current methods for evaluating vision systems focus mainly on frame-by-frame performance. Such evaluation methods appear to be inadequate for assessing the performance of a perception subsystem when used within an AV. In this paper, we present a logic -- referred to as Spatio-Temporal Perception Logic (STPL) -- which utilizes both spatial and temporal modalities. STPL enables reasoning over perception data using spatial and temporal relations. One major advantage of STPL is that it facilitates basic sanity checks on the real-time performance of the perception system, even without ground-truth data in some cases. We identify a fragment of STPL that can be efficiently monitored offline in polynomial time. Finally, we present a range of specifications for AV perception systems to highlight the types of requirements that can be expressed and analyzed through offline monitoring with STPL.

* 27 pages, 12 figures, 4 tables, 5 algorithms, 3 appendixes 

Learning Performance Graphs from Demonstrations via Task-Based Evaluations

Apr 12, 2022
Aniruddh G. Puranic, Jyotirmoy V. Deshmukh, Stefanos Nikolaidis

In the learning from demonstration (LfD) paradigm, understanding and evaluating the demonstrated behaviors plays a critical role in extracting control policies for robots. Without this knowledge, a robot may infer incorrect reward functions that lead to undesirable or unsafe control policies. To address the challenge of reward shaping, recent work has proposed an LfD framework where a user provides a set of formal task specifications to guide LfD. However, in this framework, specifications are manually ordered in a performance graph (a partial order that specifies the relative importance of the specifications). The main contribution of this paper is an algorithm to learn the performance graph directly from the user-provided demonstrations; we show that the reward functions generated using the learned performance graph produce policies similar to those obtained from manually specified performance graphs. We perform a user study which shows that priorities specified by users on behaviors in a simulated highway driving domain match the automatically inferred performance graph. This establishes that we can accurately evaluate user demonstrations with respect to task specifications without expert criteria.

* 14 pages, 9 figures 

Non-Markovian Reinforcement Learning using Fractional Dynamics

Jul 29, 2021
Gaurav Gupta, Chenzhong Yin, Jyotirmoy V. Deshmukh, Paul Bogdan

Reinforcement learning (RL) is a technique to learn the control policy for an agent that interacts with a stochastic environment. In any given state, the agent takes some action, and the environment determines the probability distribution over the next state as well as gives the agent some reward. Most RL algorithms assume that the environment satisfies the Markov assumption (i.e., the probability distribution over the next state depends only on the current state). In this paper, we propose a model-based RL technique for a system that has non-Markovian dynamics. Such environments are common in many real-world applications such as human physiology, biological systems, material science, and population dynamics. Model-based RL (MBRL) techniques typically try to simultaneously learn a model of the environment from data and identify an optimal policy for the learned model. We propose a technique where the non-Markovianity of the system is modeled through a fractional dynamical system. We show that we can quantify the performance gap between an MBRL algorithm that uses bounded-horizon model predictive control and the optimal policy. Finally, we demonstrate our proposed framework on a pharmacokinetic model of human blood glucose dynamics and show that our fractional models can capture long-range correlations on real-world datasets.

* 14 pages, 3 figures, CDC2021 
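
A minimal sketch of a Gruenwald-Letnikov-style fractional difference model, in which the next state depends on the entire state history, which is what makes the dynamics non-Markovian; the linear form and coefficients are assumptions for illustration, not the paper's exact model.

```python
import numpy as np
from scipy.special import binom

def fractional_step(history, u, alpha, A, B):
    """One step of a fractional difference model Delta^alpha x_{t+1} = A x_t + B u_t.

    history: list of past states [x_0, ..., x_t], most recent last.
    """
    x_t = history[-1]
    # Memory term: sum_{j >= 1} (-1)^j C(alpha, j) x_{t+1-j}
    memory = sum(((-1) ** j) * binom(alpha, j) * history[-j]
                 for j in range(1, len(history) + 1))
    return A @ x_t + B @ u - memory

# Hypothetical usage with a scalar state and input.
A, B, alpha = np.array([[0.9]]), np.array([[0.1]]), 0.7
hist = [np.array([1.0])]
for _ in range(10):
    hist.append(fractional_step(hist, np.array([0.0]), alpha, A, B))
```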

Learning from Demonstrations using Signal Temporal Logic

Feb 15, 2021
Aniruddh G. Puranic, Jyotirmoy V. Deshmukh, Stefanos Nikolaidis

Learning from demonstrations is an emerging paradigm to obtain effective robot control policies for complex tasks via reinforcement learning without the need to explicitly design reward functions. However, it is susceptible to imperfections in demonstrations and also raises concerns about the safety and interpretability of the learned control policies. To address these issues, we use Signal Temporal Logic (STL) to evaluate and rank the quality of demonstrations. Temporal logic-based specifications allow us to create non-Markovian rewards and also to define interesting causal dependencies between tasks, such as sequential task specifications. We validate our approach through experiments on discrete-world and OpenAI Gym environments, and show that our approach outperforms state-of-the-art Maximum Causal Entropy Inverse Reinforcement Learning.

* Published at Conference on Robot Learning (CoRL) 2020 
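
A toy sketch of ranking demonstrations by STL robustness, using the simple specification "eventually reach the goal"; the specification, tolerance, and data below are hypothetical, not the paper's benchmarks.

```python
import numpy as np

def rob_eventually_reach(traj, goal, tol):
    """Robustness of the STL spec F(||x - goal|| <= tol): positive if some
    state on the trajectory gets within tol of the goal, larger the closer
    it gets."""
    dists = np.linalg.norm(traj - goal, axis=1)
    return float(np.max(tol - dists))

# Hypothetical usage: rank demonstrations by robustness, best first.
demos = [np.random.randn(50, 2) for _ in range(5)]
ranked = sorted(demos,
                key=lambda d: rob_eventually_reach(d, np.zeros(2), tol=0.5),
                reverse=True)
```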