Gabriel Dulac-Arnold

Thoth

RoboVQA: Multimodal Long-Horizon Reasoning for Robotics

Nov 01, 2023
Pierre Sermanet, Tianli Ding, Jeffrey Zhao, Fei Xia, Debidatta Dwibedi, Keerthana Gopalakrishnan, Christine Chan, Gabriel Dulac-Arnold, Sharath Maddineni, Nikhil J Joshi, Pete Florence, Wei Han, Robert Baruch, Yao Lu, Suvir Mirchandani, Peng Xu, Pannag Sanketi, Karol Hausman, Izhak Shafran, Brian Ichter, Yuan Cao


Barkour: Benchmarking Animal-level Agility with Quadruped Robots

May 24, 2023
Ken Caluwaerts, Atil Iscen, J. Chase Kew, Wenhao Yu, Tingnan Zhang, Daniel Freeman, Kuang-Huei Lee, Lisa Lee, Stefano Saliceti, Vincent Zhuang, Nathan Batchelor, Steven Bohez, Federico Casarini, Jose Enrique Chen, Omar Cortes, Erwin Coumans, Adil Dostmohamed, Gabriel Dulac-Arnold, Alejandro Escontrela, Erik Frey, Roland Hafner, Deepali Jain, Bauyrjan Jyenis, Yuheng Kuang, Edward Lee, Linda Luu, Ofir Nachum, Ken Oslund, Jason Powell, Diego Reyes, Francesco Romano, Feresteh Sadeghi, Ron Sloat, Baruch Tabanpour, Daniel Zheng, Michael Neunert, Raia Hadsell, Nicolas Heess, Francesco Nori, Jeff Seto, Carolina Parada, Vikas Sindhwani, Vincent Vanhoucke, Jie Tan


Get Back Here: Robust Imitation by Return-to-Distribution Planning

May 02, 2023
Geoffrey Cideron, Baruch Tabanpour, Sebastian Curi, Sertan Girgin, Leonard Hussenot, Gabriel Dulac-Arnold, Matthieu Geist, Olivier Pietquin, Robert Dadashi


Investigating the role of model-based learning in exploration and transfer

Feb 08, 2023
Jacob Walker, Eszter Vértes, Yazhe Li, Gabriel Dulac-Arnold, Ankesh Anand, Théophane Weber, Jessica B. Hamrick


Learning Reward Functions for Robotic Manipulation by Observing Humans

Nov 16, 2022
Minttu Alakuijala, Gabriel Dulac-Arnold, Julien Mairal, Jean Ponce, Cordelia Schmid


C3PO: Learning to Achieve Arbitrary Goals via Massively Entropic Pretraining

Nov 07, 2022
Alexis Jacq, Manu Orsini, Gabriel Dulac-Arnold, Olivier Pietquin, Matthieu Geist, Olivier Bachem


Learning Dynamics Models for Model Predictive Agents

Sep 29, 2021
Michael Lutter, Leonard Hasenclever, Arunkumar Byravan, Gabriel Dulac-Arnold, Piotr Trochim, Nicolas Heess, Josh Merel, Yuval Tassa


Residual Reinforcement Learning from Demonstrations

Jun 15, 2021
Minttu Alakuijala, Gabriel Dulac-Arnold, Julien Mairal, Jean Ponce, Cordelia Schmid


Learning to run a Power Network Challenge: a Retrospective Analysis

Mar 02, 2021
Antoine Marot, Benjamin Donnot, Gabriel Dulac-Arnold, Adrian Kelly, Aïdan O'Sullivan, Jan Viebahn, Mariette Awad, Isabelle Guyon, Patrick Panciatici, Camilo Romero


A Geometric Perspective on Self-Supervised Policy Adaptation

Nov 14, 2020
Cristian Bodnar, Karol Hausman, Gabriel Dulac-Arnold, Rico Jonschkowski
