Oier Mees

Vision-Language Models Provide Promptable Representations for Reinforcement Learning

Feb 13, 2024
William Chen, Oier Mees, Aviral Kumar, Sergey Levine

Open X-Embodiment: Robotic Learning Datasets and RT-X Models

Oct 17, 2023
Open X-Embodiment Collaboration, Abhishek Padalkar, Acorn Pooley, Ajinkya Jain, Alex Bewley, Alex Herzog, Alex Irpan, Alexander Khazatsky, Anant Rai, Anikait Singh, Anthony Brohan, Antonin Raffin, Ayzaan Wahid, Ben Burgess-Limerick, Beomjoon Kim, Bernhard Schölkopf, Brian Ichter, Cewu Lu, Charles Xu, Chelsea Finn, Chenfeng Xu, Cheng Chi, Chenguang Huang, Christine Chan, Chuer Pan, Chuyuan Fu, Coline Devin, Danny Driess, Deepak Pathak, Dhruv Shah, Dieter Büchler, Dmitry Kalashnikov, Dorsa Sadigh, Edward Johns, Federico Ceola, Fei Xia, Freek Stulp, Gaoyue Zhou, Gaurav S. Sukhatme, Gautam Salhotra, Ge Yan, Giulio Schiavi, Gregory Kahn, Hao Su, Hao-Shu Fang, Haochen Shi, Heni Ben Amor, Henrik I Christensen, Hiroki Furuta, Homer Walke, Hongjie Fang, Igor Mordatch, Ilija Radosavovic, Isabel Leal, Jacky Liang, Jad Abou-Chakra, Jaehyung Kim, Jan Peters, Jan Schneider, Jasmine Hsu, Jeannette Bohg, Jeffrey Bingham, Jiajun Wu, Jialin Wu, Jianlan Luo, Jiayuan Gu, Jie Tan, Jihoon Oh, Jitendra Malik, Jonathan Tompson, Jonathan Yang, Joseph J. Lim, João Silvério, Junhyek Han, Kanishka Rao, Karl Pertsch, Karol Hausman, Keegan Go, Keerthana Gopalakrishnan, Ken Goldberg, Kendra Byrne, Kenneth Oslund, Kento Kawaharazuka, Kevin Zhang, Krishan Rana, Krishnan Srinivasan, Lawrence Yunliang Chen, Lerrel Pinto, Liam Tan, Lionel Ott, Lisa Lee, Masayoshi Tomizuka, Maximilian Du, Michael Ahn, Mingtong Zhang, Mingyu Ding, Mohan Kumar Srirama, Mohit Sharma, Moo Jin Kim, Naoaki Kanazawa, Nicklas Hansen, Nicolas Heess, Nikhil J Joshi, Niko Suenderhauf, Norman Di Palo, Nur Muhammad Mahi Shafiullah, Oier Mees, Oliver Kroemer, Pannag R Sanketi, Paul Wohlhart, Peng Xu, Pierre Sermanet, Priya Sundaresan, Quan Vuong, Rafael Rafailov, Ran Tian, Ria Doshi, Roberto Martín-Martín, Russell Mendonca, Rutav Shah, Ryan Hoque, Ryan Julian, Samuel Bustamante, Sean Kirmani, Sergey Levine, Sherry Moore, Shikhar Bahl, Shivin Dass, Shubham Sonawani, Shuran Song, Sichun Xu, Siddhant Haldar, Simeon Adebola, Simon Guist, Soroush Nasiriany, Stefan Schaal, Stefan Welker, Stephen Tian, Sudeep Dasari, Suneel Belkhale, Takayuki Osa, Tatsuya Harada, Tatsuya Matsushima, Ted Xiao, Tianhe Yu, Tianli Ding, Todor Davchev, Tony Z. Zhao, Travis Armstrong, Trevor Darrell, Vidhi Jain, Vincent Vanhoucke, Wei Zhan, Wenxuan Zhou, Wolfram Burgard, Xi Chen, Xiaolong Wang, Xinghao Zhu, Xuanlin Li, Yao Lu, Yevgen Chebotar, Yifan Zhou, Yifeng Zhu, Ying Xu, Yixuan Wang, Yonatan Bisk, Yoonyoung Cho, Youngwoon Lee, Yuchen Cui, Yueh-Hua Wu, Yujin Tang, Yuke Zhu, Yunzhu Li, Yusuke Iwasawa, Yutaka Matsuo, Zhuo Xu, Zichen Jeff Cui

Audio Visual Language Maps for Robot Navigation

Mar 27, 2023
Chenguang Huang, Oier Mees, Andy Zeng, Wolfram Burgard

Visual Language Maps for Robot Navigation

Oct 17, 2022
Chenguang Huang, Oier Mees, Andy Zeng, Wolfram Burgard

Grounding Language with Visual Affordances over Unstructured Data

Oct 10, 2022
Oier Mees, Jessica Borja-Diaz, Wolfram Burgard

Latent Plans for Task-Agnostic Offline Reinforcement Learning

Sep 19, 2022
Erick Rosete-Beas, Oier Mees, Gabriel Kalweit, Joschka Boedecker, Wolfram Burgard

What Matters in Language Conditioned Robotic Imitation Learning

Apr 13, 2022
Oier Mees, Lukas Hermann, Wolfram Burgard

Affordance Learning from Play for Sample-Efficient Policy Learning

Mar 01, 2022
Jessica Borja-Diaz, Oier Mees, Gabriel Kalweit, Lukas Hermann, Joschka Boedecker, Wolfram Burgard

CALVIN: A Benchmark for Language-conditioned Policy Learning for Long-horizon Robot Manipulation Tasks

Dec 08, 2021
Oier Mees, Lukas Hermann, Erick Rosete-Beas, Wolfram Burgard

Composing Pick-and-Place Tasks By Grounding Language

Feb 16, 2021
Oier Mees, Wolfram Burgard
