Gregory Kahn
Open X-Embodiment: Robotic Learning Datasets and RT-X Models

Oct 17, 2023
Open X-Embodiment Collaboration, Abhishek Padalkar, Acorn Pooley, Ajinkya Jain, Alex Bewley, Alex Herzog, Alex Irpan, Alexander Khazatsky, Anant Rai, Anikait Singh, Anthony Brohan, Antonin Raffin, Ayzaan Wahid, Ben Burgess-Limerick, Beomjoon Kim, Bernhard Schölkopf, Brian Ichter, Cewu Lu, Charles Xu, Chelsea Finn, Chenfeng Xu, Cheng Chi, Chenguang Huang, Christine Chan, Chuer Pan, Chuyuan Fu, Coline Devin, Danny Driess, Deepak Pathak, Dhruv Shah, Dieter Büchler, Dmitry Kalashnikov, Dorsa Sadigh, Edward Johns, Federico Ceola, Fei Xia, Freek Stulp, Gaoyue Zhou, Gaurav S. Sukhatme, Gautam Salhotra, Ge Yan, Giulio Schiavi, Gregory Kahn, Hao Su, Hao-Shu Fang, Haochen Shi, Heni Ben Amor, Henrik I Christensen, Hiroki Furuta, Homer Walke, Hongjie Fang, Igor Mordatch, Ilija Radosavovic, Isabel Leal, Jacky Liang, Jad Abou-Chakra, Jaehyung Kim, Jan Peters, Jan Schneider, Jasmine Hsu, Jeannette Bohg, Jeffrey Bingham, Jiajun Wu, Jialin Wu, Jianlan Luo, Jiayuan Gu, Jie Tan, Jihoon Oh, Jitendra Malik, Jonathan Tompson, Jonathan Yang, Joseph J. Lim, João Silvério, Junhyek Han, Kanishka Rao, Karl Pertsch, Karol Hausman, Keegan Go, Keerthana Gopalakrishnan, Ken Goldberg, Kendra Byrne, Kenneth Oslund, Kento Kawaharazuka, Kevin Zhang, Krishan Rana, Krishnan Srinivasan, Lawrence Yunliang Chen, Lerrel Pinto, Liam Tan, Lionel Ott, Lisa Lee, Masayoshi Tomizuka, Maximilian Du, Michael Ahn, Mingtong Zhang, Mingyu Ding, Mohan Kumar Srirama, Mohit Sharma, Moo Jin Kim, Naoaki Kanazawa, Nicklas Hansen, Nicolas Heess, Nikhil J Joshi, Niko Suenderhauf, Norman Di Palo, Nur Muhammad Mahi Shafiullah, Oier Mees, Oliver Kroemer, Pannag R Sanketi, Paul Wohlhart, Peng Xu, Pierre Sermanet, Priya Sundaresan, Quan Vuong, Rafael Rafailov, Ran Tian, Ria Doshi, Roberto Martín-Martín, Russell Mendonca, Rutav Shah, Ryan Hoque, Ryan Julian, Samuel Bustamante, Sean Kirmani, Sergey Levine, Sherry Moore, Shikhar Bahl, Shivin Dass, Shubham Sonawani, Shuran Song, Sichun Xu, Siddhant Haldar, Simeon Adebola, Simon Guist, Soroush Nasiriany, Stefan Schaal, Stefan Welker, Stephen Tian, Sudeep Dasari, Suneel Belkhale, Takayuki Osa, Tatsuya Harada, Tatsuya Matsushima, Ted Xiao, Tianhe Yu, Tianli Ding, Todor Davchev, Tony Z. Zhao, Travis Armstrong, Trevor Darrell, Vidhi Jain, Vincent Vanhoucke, Wei Zhan, Wenxuan Zhou, Wolfram Burgard, Xi Chen, Xiaolong Wang, Xinghao Zhu, Xuanlin Li, Yao Lu, Yevgen Chebotar, Yifan Zhou, Yifeng Zhu, Ying Xu, Yixuan Wang, Yonatan Bisk, Yoonyoung Cho, Youngwoon Lee, Yuchen Cui, Yueh-Hua Wu, Yujin Tang, Yuke Zhu, Yunzhu Li, Yusuke Iwasawa, Yutaka Matsuo, Zhuo Xu, Zichen Jeff Cui

Multi-Robot Deep Reinforcement Learning for Mobile Navigation

Jun 24, 2021
Katie Kang, Gregory Kahn, Sergey Levine

RECON: Rapid Exploration for Open-World Navigation with Latent Goal Models

Apr 12, 2021
Dhruv Shah, Benjamin Eysenbach, Gregory Kahn, Nicholas Rhinehart, Sergey Levine

ViNG: Learning Open-World Navigation with Visual Goals

Dec 17, 2020
Dhruv Shah, Benjamin Eysenbach, Gregory Kahn, Nicholas Rhinehart, Sergey Levine

LaND: Learning to Navigate from Disengagements

Oct 09, 2020
Gregory Kahn, Pieter Abbeel, Sergey Levine

Model-Based Meta-Reinforcement Learning for Flight with Suspended Payloads

Apr 23, 2020
Suneel Belkhale, Rachel Li, Gregory Kahn, Rowan McAllister, Roberto Calandra, Sergey Levine

BADGR: An Autonomous Self-Supervised Learning-Based Navigation System

Feb 13, 2020
Gregory Kahn, Pieter Abbeel, Sergey Levine

Generalization through Simulation: Integrating Simulated and Real Data into Deep Reinforcement Learning for Vision-Based Autonomous Flight

Feb 11, 2019
Katie Kang, Suneel Belkhale, Gregory Kahn, Pieter Abbeel, Sergey Levine

Robustness to Out-of-Distribution Inputs via Task-Aware Generative Uncertainty

Dec 27, 2018
Rowan McAllister, Gregory Kahn, Jeff Clune, Sergey Levine
