Publications by Ray Jiang

AlphaStar Unplugged: Large-Scale Offline Reinforcement Learning

Aug 07, 2023

Scaling Goal-based Exploration via Pruning Proto-goals

Feb 09, 2023

Human-level Atari 200x faster

Sep 15, 2022

Learning Expected Emphatic Traces for Deep RL

Jul 12, 2021

Emphatic Algorithms for Deep Reinforcement Learning

Jun 21, 2021

Causally Correct Partial Models for Reinforcement Learning

Feb 07, 2020

Reducing Sentiment Bias in Language Models via Counterfactual Evaluation

Nov 08, 2019

Wasserstein Fair Classification

Jul 28, 2019

Degenerate Feedback Loops in Recommender Systems

Mar 27, 2019

Learning from Delayed Outcomes with Intermediate Observations

Jul 24, 2018