Feryal Behbahani

Genie: Generative Interactive Environments

Feb 23, 2024
Jake Bruce, Michael Dennis, Ashley Edwards, Jack Parker-Holder, Yuge Shi, Edward Hughes, Matthew Lai, Aditi Mavalankar, Richie Steigerwald, Chris Apps, Yusuf Aytar, Sarah Bechtle, Feryal Behbahani, Stephanie Chan, Nicolas Heess, Lucy Gonzalez, Simon Osindero, Sherjil Ozair, Scott Reed, Jingwei Zhang, Konrad Zolna, Jeff Clune, Nando de Freitas, Satinder Singh, Tim Rocktäschel

Vision-Language Models as a Source of Rewards

Dec 14, 2023
Kate Baumli, Satinder Baveja, Feryal Behbahani, Harris Chan, Gheorghe Comanici, Sebastian Flennerhag, Maxime Gazeau, Kristian Holsheimer, Dan Horgan, Michael Laskin, Clare Lyle, Hussain Masoom, Kay McKinney, Volodymyr Mnih, Alexander Neitz, Fabio Pardo, Jack Parker-Holder, John Quan, Tim Rocktäschel, Himanshu Sahni, Tom Schaul, Yannick Schroecker, Stephen Spencer, Richie Steigerwald, Luyu Wang, Lei Zhang

Structured State Space Models for In-Context Reinforcement Learning

Mar 09, 2023
Chris Lu, Yannick Schroecker, Albert Gu, Emilio Parisotto, Jakob Foerster, Satinder Singh, Feryal Behbahani

Hierarchical Reinforcement Learning in Complex 3D Environments

Feb 28, 2023
Bernardo Avila Pires, Feryal Behbahani, Hubert Soyer, Kyriacos Nikiforou, Thomas Keck, Satinder Singh

Human-Timescale Adaptation in an Open-Ended Task Space

Jan 18, 2023
Adaptive Agent Team, Jakob Bauer, Kate Baumli, Satinder Baveja, Feryal Behbahani, Avishkar Bhoopchand, Nathalie Bradley-Schmieg, Michael Chang, Natalie Clay, Adrian Collister, Vibhavari Dasagi, Lucy Gonzalez, Karol Gregor, Edward Hughes, Sheleem Kashem, Maria Loks-Thompson, Hannah Openshaw, Jack Parker-Holder, Shreya Pathak, Nicolas Perez-Nieves, Nemanja Rakicevic, Tim Rocktäschel, Yannick Schroecker, Jakub Sygnowski, Karl Tuyls, Sarah York, Alexander Zacherl, Lei Zhang

Discovering Policies with DOMiNO: Diversity Optimization Maintaining Near Optimality

May 26, 2022
Tom Zahavy, Yannick Schroecker, Feryal Behbahani, Kate Baumli, Sebastian Flennerhag, Shaobo Hou, Satinder Singh

Model-Value Inconsistency as a Signal for Epistemic Uncertainty

Dec 08, 2021
Angelos Filos, Eszter Vértes, Zita Marinho, Gregory Farquhar, Diana Borsa, Abram Friesen, Feryal Behbahani, Tom Schaul, André Barreto, Simon Osindero

On the role of planning in model-based deep reinforcement learning

Nov 08, 2020
Jessica B. Hamrick, Abram L. Friesen, Feryal Behbahani, Arthur Guez, Fabio Viola, Sims Witherspoon, Thomas Anthony, Lars Buesing, Petar Veličković, Théophane Weber

Learning Compositional Neural Programs for Continuous Control

Jul 27, 2020
Thomas Pierrot, Nicolas Perrin, Feryal Behbahani, Alexandre Laterre, Olivier Sigaud, Karim Beguir, Nando de Freitas

Acme: A Research Framework for Distributed Reinforcement Learning

Jun 01, 2020
Matt Hoffman, Bobak Shahriari, John Aslanides, Gabriel Barth-Maron, Feryal Behbahani, Tamara Norman, Abbas Abdolmaleki, Albin Cassirer, Fan Yang, Kate Baumli, Sarah Henderson, Alex Novikov, Sergio Gómez Colmenarejo, Serkan Cabi, Caglar Gulcehre, Tom Le Paine, Andrew Cowie, Ziyu Wang, Bilal Piot, Nando de Freitas
