Dinesh Jayaraman

TOM: Learning Policy-Aware Models for Model-Based Reinforcement Learning via Transition Occupancy Matching

May 22, 2023
Yecheng Jason Ma, Kausik Sivakumar, Jason Yan, Osbert Bastani, Dinesh Jayaraman

Planning Goals for Exploration

Mar 23, 2023
Edward S. Hu, Richard Chang, Oleh Rybkin, Dinesh Jayaraman

Learning a Meta-Controller for Dynamic Grasping

Feb 16, 2023
Yinsen Jia, Jingxi Xu, Dinesh Jayaraman, Shuran Song

Training Robots to Evaluate Robots: Example-Based Interactive Reward Functions for Policy Learning

Dec 17, 2022
Kun Huang, Edward S. Hu, Dinesh Jayaraman

Long-HOT: A Modular Hierarchical Approach for Long-Horizon Object Transport

Oct 28, 2022
Sriram Narayanan, Dinesh Jayaraman, Manmohan Chandraker

VIP: Towards Universal Visual Reward and Representation via Value-Implicit Pre-Training

Sep 30, 2022
Yecheng Jason Ma, Shagun Sodhani, Dinesh Jayaraman, Osbert Bastani, Vikash Kumar, Amy Zhang

Vision-based Perimeter Defense via Multiview Pose Estimation

Sep 25, 2022
Elijah S. Lee, Giuseppe Loianno, Dinesh Jayaraman, Vijay Kumar

Fighting Fire with Fire: Avoiding DNN Shortcuts through Priming

Jun 22, 2022
Chuan Wen, Jianing Qian, Jierui Lin, Jiaye Teng, Dinesh Jayaraman, Yang Gao

How Far I'll Go: Offline Goal-Conditioned Reinforcement Learning via $f$-Advantage Regression

Jun 07, 2022
Yecheng Jason Ma, Jason Yan, Dinesh Jayaraman, Osbert Bastani

SMODICE: Versatile Offline Imitation Learning via State Occupancy Matching

Feb 04, 2022
Yecheng Jason Ma, Andrew Shen, Dinesh Jayaraman, Osbert Bastani
