Devesh K. Jha

Towards Human-Level Learning of Complex Physical Puzzles

Nov 14, 2020

Deep Reactive Planning in Dynamic Environments

Nov 05, 2020

Understanding Multi-Modal Perception Using Behavioral Cloning for Peg-In-a-Hole Insertion Tasks

Jul 22, 2020

CAZSL: Zero-Shot Regression for Pushing Models by Generalizing Through Context

Mar 26, 2020

Efficient Exploration in Constrained Environments with Goal-Oriented Reference Path

Mar 03, 2020

Can Increasing Input Dimensionality Improve Deep Reinforcement Learning?

Mar 03, 2020

Model-Based Reinforcement Learning for Physical Systems Without Velocity and Acceleration Measurements

Feb 25, 2020

Multi-label Prediction in Time Series Data using Deep Neural Networks

Jan 27, 2020

Local Policy Optimization for Trajectory-Centric Reinforcement Learning

Jan 22, 2020

Safe Approximate Dynamic Programming Via Kernelized Lipschitz Estimation

Jul 03, 2019