Mrinal Kalakrishnan

Neural feels with neural fields: Visuo-tactile perception for in-hand manipulation

Dec 20, 2023
Sudharshan Suresh, Haozhi Qi, Tingfan Wu, Taosha Fan, Luis Pineda, Mike Lambeta, Jitendra Malik, Mrinal Kalakrishnan, Roberto Calandra, Michael Kaess, Joseph Ortiz, Mustafa Mukadam

Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots

Oct 19, 2023
Xavier Puig, Eric Undersander, Andrew Szot, Mikael Dallaire Cote, Tsung-Yen Yang, Ruslan Partsey, Ruta Desai, Alexander William Clegg, Michal Hlavac, So Yeon Min, Vladimír Vondruš, Theophile Gervet, Vincent-Pierre Berges, John M. Turner, Oleksandr Maksymets, Zsolt Kira, Mrinal Kalakrishnan, Jitendra Malik, Devendra Singh Chaplot, Unnat Jain, Dhruv Batra, Akshara Rai, Roozbeh Mottaghi

What do we learn from a large-scale study of pre-trained visual representations in sim and real environments?

Oct 03, 2023
Sneha Silwal, Karmesh Yadav, Tingfan Wu, Jay Vakil, Arjun Majumdar, Sergio Arnaud, Claire Chen, Vincent-Pierre Berges, Dhruv Batra, Aravind Rajeswaran, Mrinal Kalakrishnan, Franziska Meier, Oleksandr Maksymets

Deep RL at Scale: Sorting Waste in Office Buildings with a Fleet of Mobile Manipulators

May 05, 2023
Alexander Herzog, Kanishka Rao, Karol Hausman, Yao Lu, Paul Wohlhart, Mengyuan Yan, Jessica Lin, Montserrat Gonzalez Arenas, Ted Xiao, Daniel Kappler, Daniel Ho, Jarek Rettinghouse, Yevgen Chebotar, Kuang-Huei Lee, Keerthana Gopalakrishnan, Ryan Julian, Adrian Li, Chuyuan Kelly Fu, Bob Wei, Sangeetha Ramesh, Khem Holden, Kim Kleiven, David Rendleman, Sean Kirmani, Jeff Bingham, Jon Weisz, Ying Xu, Wenlong Lu, Matthew Bennice, Cody Fong, David Do, Jessica Lam, Yunfei Bai, Benjie Holson, Michael Quinlan, Noah Brown, Mrinal Kalakrishnan, Julian Ibarz, Peter Pastor, Sergey Levine

USA-Net: Unified Semantic and Affordance Representations for Robot Memory

Apr 25, 2023
Benjamin Bolte, Austin Wang, Jimmy Yang, Mustafa Mukadam, Mrinal Kalakrishnan, Chris Paxton

How to Train Your Robot with Deep Reinforcement Learning; Lessons We've Learned

Feb 04, 2021
Julian Ibarz, Jie Tan, Chelsea Finn, Mrinal Kalakrishnan, Peter Pastor, Sergey Levine

Action Image Representation: Learning Scalable Deep Grasping Policies with Zero Real World Data

May 13, 2020
Mohi Khansari, Daniel Kappler, Jianlan Luo, Jeff Bingham, Mrinal Kalakrishnan

Quantile QT-Opt for Risk-Aware Vision-Based Robotic Grasping

Oct 01, 2019
Cristian Bodnar, Adrian Li, Karol Hausman, Peter Pastor, Mrinal Kalakrishnan

Watch, Try, Learn: Meta-Learning from Demonstrations and Reward

Jun 07, 2019
Allan Zhou, Eric Jang, Daniel Kappler, Alex Herzog, Mohi Khansari, Paul Wohlhart, Yunfei Bai, Mrinal Kalakrishnan, Sergey Levine, Chelsea Finn
