Karol Hausman


RT-1: Robotics Transformer for Real-World Control at Scale

Dec 13, 2022
Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Joseph Dabis, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Tomas Jackson, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Kuang-Huei Lee, Sergey Levine, Yao Lu, Utsav Malla, Deeksha Manjunath, Igor Mordatch, Ofir Nachum, Carolina Parada, Jodilyn Peralta, Emily Perez, Karl Pertsch, Jornell Quiambao, Kanishka Rao, Michael Ryoo, Grecia Salazar, Pannag Sanketi, Kevin Sayed, Jaspiar Singh, Sumedh Sontakke, Austin Stone, Clayton Tan, Huong Tran, Vincent Vanhoucke, Steve Vega, Quan Vuong, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, Brianna Zitkovich


Robotic Skill Acquisition via Instruction Augmentation with Vision-Language Models

Nov 22, 2022
Ted Xiao, Harris Chan, Pierre Sermanet, Ayzaan Wahid, Anthony Brohan, Karol Hausman, Sergey Levine, Jonathan Tompson


Code as Policies: Language Model Programs for Embodied Control

Sep 19, 2022
Jacky Liang, Wenlong Huang, Fei Xia, Peng Xu, Karol Hausman, Brian Ichter, Pete Florence, Andy Zeng


Offline Reinforcement Learning at Multiple Frequencies

Jul 26, 2022
Kaylee Burns, Tianhe Yu, Chelsea Finn, Karol Hausman


Inner Monologue: Embodied Reasoning through Planning with Language Models

Jul 12, 2022
Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, Pierre Sermanet, Noah Brown, Tomas Jackson, Linda Luu, Sergey Levine, Karol Hausman, Brian Ichter


Jump-Start Reinforcement Learning

Apr 05, 2022
Ikechukwu Uchendu, Ted Xiao, Yao Lu, Banghua Zhu, Mengyuan Yan, Joséphine Simon, Matthew Bennice, Chuyuan Fu, Cong Ma, Jiantao Jiao, Sergey Levine, Karol Hausman


Do As I Can, Not As I Say: Grounding Language in Robotic Affordances

Apr 04, 2022
Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alex Herzog, Daniel Ho, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Eric Jang, Rosario Jauregui Ruano, Kyle Jeffrey, Sally Jesmonth, Nikhil J Joshi, Ryan Julian, Dmitry Kalashnikov, Yuheng Kuang, Kuang-Huei Lee, Sergey Levine, Yao Lu, Linda Luu, Carolina Parada, Peter Pastor, Jornell Quiambao, Kanishka Rao, Jarek Rettinghouse, Diego Reyes, Pierre Sermanet, Nicolas Sievers, Clayton Tan, Alexander Toshev, Vincent Vanhoucke, Fei Xia, Ted Xiao, Peng Xu, Sichun Xu, Mengyuan Yan


Demonstration-Bootstrapped Autonomous Practicing via Multi-Task Reinforcement Learning

Mar 29, 2022
Abhishek Gupta, Corey Lynch, Brandon Kinman, Garrett Peake, Sergey Levine, Karol Hausman


How to Leverage Unlabeled Data in Offline Reinforcement Learning

Feb 03, 2022
Tianhe Yu, Aviral Kumar, Yevgen Chebotar, Karol Hausman, Chelsea Finn, Sergey Levine
