John Schulman

Measuring Sample Efficiency and Generalization in Reinforcement Learning Benchmarks: NeurIPS 2020 Procgen Benchmark

Mar 29, 2021
Sharada Mohanty, Jyotish Poonganam, Adrien Gaidon, Andrey Kolobov, Blake Wulfe, Dipam Chakraborty, Gražvydas Šemetulskis, João Schapke, Jonas Kubilius, Jurgis Pašukonis, Linas Klimas, Matthew Hausknecht, Patrick MacAlpine, Quang Nhat Tran, Thomas Tumiel, Xiaocheng Tang, Xinwei Chen, Christopher Hesse, Jacob Hilton, William Hebgen Guss, Sahika Genc, John Schulman, Karl Cobbe

The MineRL 2020 Competition on Sample Efficient Reinforcement Learning using Human Priors

Jan 26, 2021
William H. Guss, Mario Ynocente Castro, Sam Devlin, Brandon Houghton, Noboru Sean Kuno, Crissman Loomis, Stephanie Milani, Sharada Mohanty, Keisuke Nakata, Ruslan Salakhutdinov, John Schulman, Shinya Shiroshita, Nicholay Topin, Avinash Ummadisingu, Oriol Vinyals

Scaling Laws for Autoregressive Generative Modeling

Nov 06, 2020
Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo Jun, Tom B. Brown, Prafulla Dhariwal, Scott Gray, Chris Hallacy, Benjamin Mann, Alec Radford, Aditya Ramesh, Nick Ryder, Daniel M. Ziegler, John Schulman, Dario Amodei, Sam McCandlish

Phasic Policy Gradient

Sep 09, 2020
Karl Cobbe, Jacob Hilton, Oleg Klimov, John Schulman

Leveraging Procedural Generation to Benchmark Reinforcement Learning

Dec 03, 2019
Karl Cobbe, Christopher Hesse, Jacob Hilton, John Schulman

Policy Gradient Search: Online Planning and Expert Iteration without Search Trees

Apr 07, 2019
Thomas Anthony, Robert Nishihara, Philipp Moritz, Tim Salimans, John Schulman

Semi-Supervised Learning by Label Gradient Alignment

Feb 06, 2019
Jacob Jackson, John Schulman

Quantifying Generalization in Reinforcement Learning

Dec 20, 2018
Karl Cobbe, Oleg Klimov, Chris Hesse, Taehoon Kim, John Schulman
