Pieter Abbeel

Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design

Apr 01, 2022
Kourosh Hakhamaneshi, Marcel Nassar, Mariano Phielipp, Pieter Abbeel, Vladimir Stojanović


Adversarial Motion Priors Make Good Substitutes for Complex Reward Functions

Mar 28, 2022
Alejandro Escontrela, Xue Bin Peng, Wenhao Yu, Tingnan Zhang, Atil Iscen, Ken Goldberg, Pieter Abbeel


Reinforcement Learning with Action-Free Pre-Training from Videos

Mar 25, 2022
Younggyo Seo, Kimin Lee, Stephen James, Pieter Abbeel


Teachable Reinforcement Learning via Advice Distillation

Mar 19, 2022
Olivia Watkins, Trevor Darrell, Pieter Abbeel, Jacob Andreas, Abhishek Gupta


SURF: Semi-supervised Reward Learning with Data Augmentation for Feedback-efficient Preference-based Reinforcement Learning

Mar 18, 2022
Jongjin Park, Younggyo Seo, Jinwoo Shin, Honglak Lee, Pieter Abbeel, Kimin Lee


It Takes Four to Tango: Multiagent Selfplay for Automatic Curriculum Generation

Feb 22, 2022
Yuqing Du, Pieter Abbeel, Aditya Grover


Don't Change the Algorithm, Change the Data: Exploratory Data for Offline Reinforcement Learning

Feb 08, 2022
Denis Yarats, David Brandfonbrener, Hao Liu, Michael Laskin, Pieter Abbeel, Alessandro Lazaric, Lerrel Pinto


Bingham Policy Parameterization for 3D Rotations in Reinforcement Learning

Feb 08, 2022
Stephen James, Pieter Abbeel


CIC: Contrastive Intrinsic Control for Unsupervised Skill Discovery

Feb 01, 2022
Michael Laskin, Hao Liu, Xue Bin Peng, Denis Yarats, Aravind Rajeswaran, Pieter Abbeel


Explaining Reinforcement Learning Policies through Counterfactual Trajectories

Jan 29, 2022
Julius Frost, Olivia Watkins, Eric Weiner, Pieter Abbeel, Trevor Darrell, Bryan Plummer, Kate Saenko
