Arunkumar Byravan

A Generalist Dynamics Model for Control
May 18, 2023

Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning
Apr 26, 2023

Leveraging Jumpy Models for Planning and Fast Learning in Robotic Domains
Feb 24, 2023

NeRF2Real: Sim2real Transfer of Vision-guided Bipedal Motion Skills using Neural Radiance Fields
Oct 10, 2022

Revisiting Gaussian Mixture Critics in Off-policy Reinforcement Learning: A Sample-based Approach
Apr 22, 2022

The Challenges of Exploration for Offline Reinforcement Learning
Jan 27, 2022

Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes
Nov 03, 2021

Evaluating Model-based Planning and Planner Amortization for Continuous Control
Oct 07, 2021

Learning Dynamics Models for Model Predictive Agents
Sep 29, 2021

On Multi-objective Policy Optimization as a Tool for Reinforcement Learning
Jun 15, 2021