Benjamin Eysenbach

Bitrate-Constrained DRO: Beyond Worst Case Robustness To Unknown Group Shifts

Feb 06, 2023
Amrith Setlur, Don Dennis, Benjamin Eysenbach, Aditi Raghunathan, Chelsea Finn, Virginia Smith, Sergey Levine

Learning Options via Compression

Dec 08, 2022
Yiding Jiang, Evan Zheran Liu, Benjamin Eysenbach, Zico Kolter, Chelsea Finn

Contrastive Value Learning: Implicit Models for Simple Offline RL

Nov 03, 2022
Bogdan Mazoure, Benjamin Eysenbach, Ofir Nachum, Jonathan Tompson

Simplifying Model-based RL: Learning Representations, Latent-space Models, and Policies with One Objective

Sep 18, 2022
Raj Ghugare, Homanga Bharadhwaj, Benjamin Eysenbach, Sergey Levine, Ruslan Salakhutdinov

Contrastive Learning as Goal-Conditioned Reinforcement Learning

Jun 15, 2022
Benjamin Eysenbach, Tianjun Zhang, Ruslan Salakhutdinov, Sergey Levine

Imitating Past Successes can be Very Suboptimal

Jun 07, 2022
Benjamin Eysenbach, Soumith Udatha, Sergey Levine, Ruslan Salakhutdinov

Adversarial Unlearning: Reducing Confidence Along Adversarial Directions

Jun 03, 2022
Amrith Setlur, Benjamin Eysenbach, Virginia Smith, Sergey Levine

RvS: What is Essential for Offline RL via Supervised Learning?

Dec 20, 2021
Scott Emmons, Benjamin Eysenbach, Ilya Kostrikov, Sergey Levine

C-Planning: An Automatic Curriculum for Learning Goal-Reaching Tasks

Oct 22, 2021
Tianjun Zhang, Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine, Joseph E. Gonzalez

Recurrent Model-Free RL is a Strong Baseline for Many POMDPs

Oct 11, 2021
Tianwei Ni, Benjamin Eysenbach, Ruslan Salakhutdinov
