Benjamin Eysenbach

ViNG: Learning Open-World Navigation with Visual Goals

Dec 17, 2020
Dhruv Shah, Benjamin Eysenbach, Gregory Kahn, Nicholas Rhinehart, Sergey Levine

C-Learning: Learning to Achieve Goals via Recursive Classification

Nov 17, 2020
Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine

f-IRL: Inverse Reinforcement Learning via State Marginal Matching

Nov 09, 2020
Tianwei Ni, Harshit Sikchi, Yufei Wang, Tejus Gupta, Lisa Lee, Benjamin Eysenbach

Learning to be Safe: Deep RL with a Safety Critic

Oct 27, 2020
Krishnan Srinivasan, Benjamin Eysenbach, Sehoon Ha, Jie Tan, Chelsea Finn

Interactive Visualization for Debugging RL

Aug 18, 2020
Shuby Deshpande, Benjamin Eysenbach, Jeff Schneider

Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers

Jun 24, 2020
Benjamin Eysenbach, Swapnil Asawa, Shreyas Chaudhari, Ruslan Salakhutdinov, Sergey Levine

Weakly-Supervised Reinforcement Learning for Controllable Behavior

Apr 06, 2020
Lisa Lee, Benjamin Eysenbach, Ruslan Salakhutdinov, Shixiang Gu, Chelsea Finn

Rewriting History with Inverse RL: Hindsight Inference for Policy Improvement

Feb 25, 2020
Benjamin Eysenbach, Xinyang Geng, Sergey Levine, Ruslan Salakhutdinov

Learning To Reach Goals Without Reinforcement Learning

Dec 13, 2019
Dibya Ghosh, Abhishek Gupta, Justin Fu, Ashwin Reddy, Coline Devin, Benjamin Eysenbach, Sergey Levine
