Abhinav Gupta

R3M: A Universal Visual Representation for Robot Manipulation

Mar 23, 2022

RB2: Robotic Manipulation Benchmarking with a Twist

Mar 15, 2022

The Unsurprising Effectiveness of Pre-Trained Vision Models for Control

Mar 07, 2022

Interesting Object, Curious Agent: Learning Task-Agnostic Exploration

Nov 25, 2021

A Differentiable Recipe for Learning Visual Non-Prehensile Planar Manipulation

Nov 09, 2021

ReSkin: versatile, replaceable, lasting tactile skins

Oct 29, 2021

Dynamic population-based meta-learning for multi-agent communication with natural language

Oct 27, 2021

No RL, No Simulation: Learning to Navigate without Navigating

Oct 22, 2021

CORA: Benchmarks, Baselines, and Metrics as a Platform for Continual Reinforcement Learning Agents

Oct 19, 2021

Learning Multi-Objective Curricula for Deep Reinforcement Learning

Oct 06, 2021