Valentin Dalibard

RoboCat: A Self-Improving Foundation Agent for Robotic Manipulation

Jun 20, 2023
Konstantinos Bousmalis, Giulia Vezzani, Dushyant Rao, Coline Devin, Alex X. Lee, Maria Bauza, Todor Davchev, Yuxiang Zhou, Agrim Gupta, Akhil Raju, Antoine Laurens, Claudio Fantacci, Valentin Dalibard, Martina Zambelli, Murilo Martins, Rugile Pevceviciute, Michiel Blokzijl, Misha Denil, Nathan Batchelor, Thomas Lampe, Emilio Parisotto, Konrad Żołna, Scott Reed, Sergio Gómez Colmenarejo, Jon Scholz, Abbas Abdolmaleki, Oliver Groth, Jean-Baptiste Regli, Oleg Sushkov, Tom Rothörl, José Enrique Chen, Yusuf Aytar, Dave Barker, Joy Ortiz, Martin Riedmiller, Jost Tobias Springenberg, Raia Hadsell, Francesco Nori, Nicolas Heess

Discovering Attention-Based Genetic Algorithms via Meta-Black-Box Optimization

Apr 08, 2023
Robert Tjarko Lange, Tom Schaul, Yutian Chen, Chris Lu, Tom Zahavy, Valentin Dalibard, Sebastian Flennerhag

Rapid training of deep neural networks without skip connections or normalization layers using Deep Kernel Shaping

Oct 05, 2021
James Martens, Andy Ballard, Guillaume Desjardins, Grzegorz Swirszcz, Valentin Dalibard, Jascha Sohl-Dickstein, Samuel S. Schoenholz

Faster Improvement Rate Population Based Training

Sep 28, 2021
Valentin Dalibard, Max Jaderberg

Open-Ended Learning Leads to Generally Capable Agents

Jul 31, 2021
Open Ended Learning Team, Adam Stooke, Anuj Mahajan, Catarina Barros, Charlie Deck, Jakob Bauer, Jakub Sygnowski, Maja Trebacz, Max Jaderberg, Michael Mathieu, Nat McAleese, Nathalie Bradley-Schmieg, Nathaniel Wong, Nicolas Porcel, Roberta Raileanu, Steph Hughes-Fitt, Valentin Dalibard, Wojciech Marian Czarnecki

Perception-Prediction-Reaction Agents for Deep Reinforcement Learning

Jun 26, 2020
Adam Stooke, Valentin Dalibard, Siddhant M. Jayakumar, Wojciech M. Czarnecki, Max Jaderberg

A Generalized Framework for Population Based Training

Feb 05, 2019
Ang Li, Ola Spyra, Sagi Perel, Valentin Dalibard, Max Jaderberg, Chenjie Gu, David Budden, Tim Harley, Pramod Gupta

Population Based Training of Neural Networks

Nov 28, 2017
Max Jaderberg, Valentin Dalibard, Simon Osindero, Wojciech M. Czarnecki, Jeff Donahue, Ali Razavi, Oriol Vinyals, Tim Green, Iain Dunning, Karen Simonyan, Chrisantha Fernando, Koray Kavukcuoglu

Tuning the Scheduling of Distributed Stochastic Gradient Descent with Bayesian Optimization

Dec 01, 2016
Valentin Dalibard, Michael Schaarschmidt, Eiko Yoneki
