Aviral Kumar

Offline RL With Realistic Datasets: Heteroskedasticity and Support Constraints

Nov 21, 2022
Anikait Singh, Aviral Kumar, Quan Vuong, Yevgen Chebotar, Sergey Levine

Dual Generator Offline Reinforcement Learning

Nov 02, 2022
Quan Vuong, Aviral Kumar, Sergey Levine, Yevgen Chebotar

Pre-Training for Robots: Offline RL Enables Learning New Tasks from a Handful of Trials

Oct 11, 2022
Aviral Kumar, Anikait Singh, Frederik Ebert, Yanlai Yang, Chelsea Finn, Sergey Levine

Don't Start From Scratch: Leveraging Prior Data to Automate Robotic Reinforcement Learning

Jul 17, 2022
Homer Walke, Jonathan Yang, Albert Yu, Aviral Kumar, Jedrzej Orbik, Avi Singh, Sergey Levine

When Should We Prefer Offline Reinforcement Learning Over Behavioral Cloning?

Apr 12, 2022
Aviral Kumar, Joey Hong, Anikait Singh, Sergey Levine

Design-Bench: Benchmarks for Data-Driven Offline Model-Based Optimization

Feb 17, 2022
Brandon Trabucco, Xinyang Geng, Aviral Kumar, Sergey Levine

How to Leverage Unlabeled Data in Offline Reinforcement Learning

Feb 03, 2022
Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Chelsea Finn, Sergey Levine

DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization

Dec 09, 2021
Aviral Kumar, Rishabh Agarwal, Tengyu Ma, Aaron Courville, George Tucker, Sergey Levine

Data-Driven Offline Optimization For Architecting Hardware Accelerators

Oct 20, 2021
Aviral Kumar, Amir Yazdanbakhsh, Milad Hashemi, Kevin Swersky, Sergey Levine
