Yuval Tassa

DeepMind Control Suite
Jan 02, 2018
Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, Timothy Lillicrap, Martin Riedmiller

Emergence of Locomotion Behaviours in Rich Environments
Jul 10, 2017
Nicolas Heess, Dhruva TB, Srinivasan Sriram, Jay Lemmon, Josh Merel, Greg Wayne, Yuval Tassa, Tom Erez, Ziyu Wang, S. M. Ali Eslami, Martin Riedmiller, David Silver

Learning human behaviors from motion capture by adversarial imitation
Jul 10, 2017
Josh Merel, Yuval Tassa, Dhruva TB, Sriram Srinivasan, Jay Lemmon, Ziyu Wang, Greg Wayne, Nicolas Heess

Data-efficient Deep Reinforcement Learning for Dexterous Manipulation
Apr 10, 2017
Ivaylo Popov, Nicolas Heess, Timothy Lillicrap, Roland Hafner, Gabriel Barth-Maron, Matej Vecerik, Thomas Lampe, Yuval Tassa, Tom Erez, Martin Riedmiller

Learning and Transfer of Modulated Locomotor Controllers
Oct 17, 2016
Nicolas Heess, Greg Wayne, Yuval Tassa, Timothy Lillicrap, Martin Riedmiller, David Silver

Attend, Infer, Repeat: Fast Scene Understanding with Generative Models
Aug 12, 2016
S. M. Ali Eslami, Nicolas Heess, Theophane Weber, Yuval Tassa, David Szepesvari, Koray Kavukcuoglu, Geoffrey E. Hinton

Continuous control with deep reinforcement learning
Feb 29, 2016
Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra

Learning Continuous Control Policies by Stochastic Value Gradients
Oct 30, 2015
Nicolas Heess, Greg Wayne, David Silver, Timothy Lillicrap, Yuval Tassa, Tom Erez
