Mrinal Kalakrishnan

Neural feels with neural fields: Visuo-tactile perception for in-hand manipulation

Dec 20, 2023

Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots

Oct 19, 2023

What do we learn from a large-scale study of pre-trained visual representations in sim and real environments?

Oct 03, 2023

Deep RL at Scale: Sorting Waste in Office Buildings with a Fleet of Mobile Manipulators

May 05, 2023

USA-Net: Unified Semantic and Affordance Representations for Robot Memory

Apr 25, 2023

How to Train Your Robot with Deep Reinforcement Learning; Lessons We've Learned

Feb 04, 2021

Action Image Representation: Learning Scalable Deep Grasping Policies with Zero Real World Data

May 13, 2020

Quantile QT-Opt for Risk-Aware Vision-Based Robotic Grasping

Oct 01, 2019

Watch, Try, Learn: Meta-Learning from Demonstrations and Reward

Jun 07, 2019

Learning Probabilistic Multi-Modal Actor Models for Vision-Based Robotic Grasping

Apr 15, 2019