Jonathan Ho

Compression with Flows via Local Bits-Back Coding

May 21, 2019
Jonathan Ho, Evan Lohn, Pieter Abbeel

Bit-Swap: Recursive Bits-Back Coding for Lossless Compression with Hierarchical Latent Variables

May 16, 2019
Friso H. Kingma, Pieter Abbeel, Jonathan Ho

Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design

Feb 01, 2019
Jonathan Ho, Xi Chen, Aravind Srinivas, Yan Duan, Pieter Abbeel

Evolved Policy Gradients

Apr 29, 2018
Rein Houthooft, Richard Y. Chen, Phillip Isola, Bradly C. Stadie, Filip Wolski, Jonathan Ho, Pieter Abbeel

One-Shot Imitation Learning

Dec 04, 2017
Yan Duan, Marcin Andrychowicz, Bradly C. Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, Wojciech Zaremba

Meta Learning Shared Hierarchies

Oct 26, 2017
Kevin Frans, Jonathan Ho, Xi Chen, Pieter Abbeel, John Schulman

Evolution Strategies as a Scalable Alternative to Reinforcement Learning

Sep 07, 2017
Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, Ilya Sutskever

Generative Adversarial Imitation Learning

Jun 10, 2016
Jonathan Ho, Stefano Ermon

Model-Free Imitation Learning with Policy Optimization

May 26, 2016
Jonathan Ho, Jayesh K. Gupta, Stefano Ermon
