Peter Pastor


Deep RL at Scale: Sorting Waste in Office Buildings with a Fleet of Mobile Manipulators

May 05, 2023
Alexander Herzog, Kanishka Rao, Karol Hausman, Yao Lu, Paul Wohlhart, Mengyuan Yan, Jessica Lin, Montserrat Gonzalez Arenas, Ted Xiao, Daniel Kappler, Daniel Ho, Jarek Rettinghouse, Yevgen Chebotar, Kuang-Huei Lee, Keerthana Gopalakrishnan, Ryan Julian, Adrian Li, Chuyuan Kelly Fu, Bob Wei, Sangeetha Ramesh, Khem Holden, Kim Kleiven, David Rendleman, Sean Kirmani, Jeff Bingham, Jon Weisz, Ying Xu, Wenlong Lu, Matthew Bennice, Cody Fong, David Do, Jessica Lam, Yunfei Bai, Benjie Holson, Michael Quinlan, Noah Brown, Mrinal Kalakrishnan, Julian Ibarz, Peter Pastor, Sergey Levine


Do As I Can, Not As I Say: Grounding Language in Robotic Affordances

Apr 04, 2022
Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan


How to Train Your Robot with Deep Reinforcement Learning: Lessons We've Learned

Feb 04, 2021
Julian Ibarz, Jie Tan, Chelsea Finn, Mrinal Kalakrishnan, Peter Pastor, Sergey Levine


Quantile QT-Opt for Risk-Aware Vision-Based Robotic Grasping

Oct 01, 2019
Cristian Bodnar, Adrian Li, Karol Hausman, Peter Pastor, Mrinal Kalakrishnan


Learning Probabilistic Multi-Modal Actor Models for Vision-Based Robotic Grasping

Apr 15, 2019
Mengyuan Yan, Adrian Li, Mrinal Kalakrishnan, Peter Pastor


QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation

Nov 28, 2018
Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, Sergey Levine


End-to-End Learning of Semantic Grasping

Nov 09, 2017
Eric Jang, Sudheendra Vijayanarasimhan, Peter Pastor, Julian Ibarz, Sergey Levine


Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping

Sep 25, 2017
Konstantinos Bousmalis, Alex Irpan, Paul Wohlhart, Yunfei Bai, Matthew Kelcey, Mrinal Kalakrishnan, Laura Downs, Julian Ibarz, Peter Pastor, Kurt Konolige, Sergey Levine, Vincent Vanhoucke


Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection

Aug 28, 2016
Sergey Levine, Peter Pastor, Alex Krizhevsky, Deirdre Quillen
