William Hebgen Guss

Evaluating Large Language Models Trained on Code

Jul 14, 2021

Multi-task curriculum learning in a complex, visual, hard-exploration domain: Minecraft

Jun 28, 2021

Towards Robust and Domain-Agnostic Reinforcement Learning Competitions

Jun 07, 2021

Measuring Sample Efficiency and Generalization in Reinforcement Learning Benchmarks: NeurIPS 2020 Procgen Benchmark

Mar 29, 2021