Devesh K. Jha

Understanding Multi-Modal Perception Using Behavioral Cloning for Peg-In-a-Hole Insertion Tasks

Jul 22, 2020
Yifang Liu, Diego Romeres, Devesh K. Jha, Daniel Nikovski

CAZSL: Zero-Shot Regression for Pushing Models by Generalizing Through Context

Mar 26, 2020
Wenyu Zhang, Skyler Seto, Devesh K. Jha

Efficient Exploration in Constrained Environments with Goal-Oriented Reference Path

Mar 03, 2020
Kei Ota, Yoko Sasaki, Devesh K. Jha, Yusuke Yoshiyasu, Asako Kanezaki

Can Increasing Input Dimensionality Improve Deep Reinforcement Learning?

Mar 03, 2020
Kei Ota, Tomoaki Oiki, Devesh K. Jha, Toshisada Mariyama, Daniel Nikovski

Model-Based Reinforcement Learning for Physical Systems Without Velocity and Acceleration Measurements

Feb 25, 2020
Alberto Dalla Libera, Diego Romeres, Devesh K. Jha, Bill Yerazunis, Daniel Nikovski

Multi-label Prediction in Time Series Data using Deep Neural Networks

Jan 27, 2020
Wenyu Zhang, Devesh K. Jha, Emil Laftchiev, Daniel Nikovski

Local Policy Optimization for Trajectory-Centric Reinforcement Learning

Jan 22, 2020
Patrik Kolaric, Devesh K. Jha, Arvind U. Raghunathan, Frank L. Lewis, Mouhacine Benosman, Diego Romeres, Daniel Nikovski

Safe Approximate Dynamic Programming Via Kernelized Lipschitz Estimation

Jul 03, 2019
Ankush Chakrabarty, Devesh K. Jha, Gregery T. Buzzard, Yebin Wang, Kyriakos Vamvoudakis

Game Theoretic Optimization via Gradient-based Nikaido-Isoda Function

May 15, 2019
Arvind U. Raghunathan, Anoop Cherian, Devesh K. Jha

Trajectory Optimization for Unknown Constrained Systems using Reinforcement Learning

Mar 13, 2019
Kei Ota, Devesh K. Jha, Tomoaki Oiki, Mamoru Miura, Takashi Nammoto, Daniel Nikovski, Toshisada Mariyama
