
Todor Davchev


Open X-Embodiment: Robotic Learning Datasets and RT-X Models

Oct 17, 2023
Open X-Embodiment Collaboration, Abhishek Padalkar, Acorn Pooley, Ajinkya Jain, Alex Bewley, Alex Herzog, Alex Irpan, Alexander Khazatsky, Anant Rai, Anikait Singh, Anthony Brohan, Antonin Raffin, Ayzaan Wahid, Ben Burgess-Limerick, Beomjoon Kim, Bernhard Schölkopf, Brian Ichter, Cewu Lu, Charles Xu, Chelsea Finn, Chenfeng Xu, Cheng Chi, Chenguang Huang, Christine Chan, Chuer Pan, Chuyuan Fu, Coline Devin, Danny Driess, Deepak Pathak, Dhruv Shah, Dieter Büchler, Dmitry Kalashnikov, Dorsa Sadigh, Edward Johns, Federico Ceola, Fei Xia, Freek Stulp, Gaoyue Zhou, Gaurav S. Sukhatme, Gautam Salhotra, Ge Yan, Giulio Schiavi, Gregory Kahn, Hao Su, Hao-Shu Fang, Haochen Shi, Heni Ben Amor, Henrik I Christensen, Hiroki Furuta, Homer Walke, Hongjie Fang, Igor Mordatch, Ilija Radosavovic, Isabel Leal, Jacky Liang, Jad Abou-Chakra, Jaehyung Kim, Jan Peters, Jan Schneider, Jasmine Hsu, Jeannette Bohg, Jeffrey Bingham, Jiajun Wu, Jialin Wu, Jianlan Luo, Jiayuan Gu, Jie Tan, Jihoon Oh, Jitendra Malik, Jonathan Tompson, Jonathan Yang, Joseph J. 
Lim, João Silvério, Junhyek Han, Kanishka Rao, Karl Pertsch, Karol Hausman, Keegan Go, Keerthana Gopalakrishnan, Ken Goldberg, Kendra Byrne, Kenneth Oslund, Kento Kawaharazuka, Kevin Zhang, Krishan Rana, Krishnan Srinivasan, Lawrence Yunliang Chen, Lerrel Pinto, Liam Tan, Lionel Ott, Lisa Lee, Masayoshi Tomizuka, Maximilian Du, Michael Ahn, Mingtong Zhang, Mingyu Ding, Mohan Kumar Srirama, Mohit Sharma, Moo Jin Kim, Naoaki Kanazawa, Nicklas Hansen, Nicolas Heess, Nikhil J Joshi, Niko Suenderhauf, Norman Di Palo, Nur Muhammad Mahi Shafiullah, Oier Mees, Oliver Kroemer, Pannag R Sanketi, Paul Wohlhart, Peng Xu, Pierre Sermanet, Priya Sundaresan, Quan Vuong, Rafael Rafailov, Ran Tian, Ria Doshi, Roberto Martín-Martín, Russell Mendonca, Rutav Shah, Ryan Hoque, Ryan Julian, Samuel Bustamante, Sean Kirmani, Sergey Levine, Sherry Moore, Shikhar Bahl, Shivin Dass, Shubham Sonawani, Shuran Song, Sichun Xu, Siddhant Haldar, Simeon Adebola, Simon Guist, Soroush Nasiriany, Stefan Schaal, Stefan Welker, Stephen Tian, Sudeep Dasari, Suneel Belkhale, Takayuki Osa, Tatsuya Harada, Tatsuya Matsushima, Ted Xiao, Tianhe Yu, Tianli Ding, Todor Davchev, Tony Z. Zhao, Travis Armstrong, Trevor Darrell, Vidhi Jain, Vincent Vanhoucke, Wei Zhan, Wenxuan Zhou, Wolfram Burgard, Xi Chen, Xiaolong Wang, Xinghao Zhu, Xuanlin Li, Yao Lu, Yevgen Chebotar, Yifan Zhou, Yifeng Zhu, Ying Xu, Yixuan Wang, Yonatan Bisk, Yoonyoung Cho, Youngwoon Lee, Yuchen Cui, Yueh-Hua Wu, Yujin Tang, Yuke Zhu, Yunzhu Li, Yusuke Iwasawa, Yutaka Matsuo, Zhuo Xu, Zichen Jeff Cui


RoboTAP: Tracking Arbitrary Points for Few-Shot Visual Imitation

Aug 31, 2023
Mel Vecerik, Carl Doersch, Yi Yang, Todor Davchev, Yusuf Aytar, Guangyao Zhou, Raia Hadsell, Lourdes Agapito, Jon Scholz


RoboCat: A Self-Improving Foundation Agent for Robotic Manipulation

Jun 20, 2023
Konstantinos Bousmalis, Giulia Vezzani, Dushyant Rao, Coline Devin, Alex X. Lee, Maria Bauza, Todor Davchev, Yuxiang Zhou, Agrim Gupta, Akhil Raju, Antoine Laurens, Claudio Fantacci, Valentin Dalibard, Martina Zambelli, Murilo Martins, Rugile Pevceviciute, Michiel Blokzijl, Misha Denil, Nathan Batchelor, Thomas Lampe, Emilio Parisotto, Konrad Żołna, Scott Reed, Sergio Gómez Colmenarejo, Jon Scholz, Abbas Abdolmaleki, Oliver Groth, Jean-Baptiste Regli, Oleg Sushkov, Tom Rothörl, José Enrique Chen, Yusuf Aytar, Dave Barker, Joy Ortiz, Martin Riedmiller, Jost Tobias Springenberg, Raia Hadsell, Francesco Nori, Nicolas Heess


Wish you were here: Hindsight Goal Selection for long-horizon dexterous manipulation

Dec 02, 2021
Todor Davchev, Oleg Sushkov, Jean-Baptiste Regli, Stefan Schaal, Yusuf Aytar, Markus Wulfmeier, Jon Scholz


Learning Time-Invariant Reward Functions through Model-Based Inverse Reinforcement Learning

Jul 07, 2021
Todor Davchev, Sarah Bechtle, Subramanian Ramamoorthy, Franziska Meier


Model-Based Inverse Reinforcement Learning from Visual Demonstrations

Oct 18, 2020
Neha Das, Sarah Bechtle, Todor Davchev, Dinesh Jayaraman, Akshara Rai, Franziska Meier


Residual Learning from Demonstration

Aug 18, 2020
Todor Davchev, Kevin Sebastian Luck, Michael Burke, Franziska Meier, Stefan Schaal, Subramanian Ramamoorthy


Learning with Modular Representations for Long-Term Multi-Agent Motion Predictions

Jan 17, 2020
Todor Davchev, Michael Burke, Subramanian Ramamoorthy


Learning Modular Representations for Long-Term Multi-Agent Motion Predictions

Dec 02, 2019
Todor Davchev, Michael Burke, Subramanian Ramamoorthy
