Aviral Kumar

Cal-QL: Calibrated Offline RL Pre-Training for Efficient Online Fine-Tuning
Mar 09, 2023

Confidence-Conditioned Value Functions for Offline Reinforcement Learning
Dec 08, 2022

Offline Q-Learning on Diverse Multi-Task Data Both Scales And Generalizes
Nov 28, 2022

Data-Driven Offline Decision-Making via Invariant Representation Learning
Nov 25, 2022

Offline RL With Realistic Datasets: Heteroskedasticity and Support Constraints
Nov 21, 2022

Dual Generator Offline Reinforcement Learning
Nov 02, 2022

Pre-Training for Robots: Offline RL Enables Learning New Tasks from a Handful of Trials
Oct 11, 2022

Don't Start From Scratch: Leveraging Prior Data to Automate Robotic Reinforcement Learning
Jul 17, 2022

When Should We Prefer Offline Reinforcement Learning Over Behavioral Cloning?
Apr 12, 2022

Design-Bench: Benchmarks for Data-Driven Offline Model-Based Optimization
Feb 17, 2022