Martin Riedmiller

Revisiting Gaussian mixture critic in off-policy reinforcement learning: a sample-based approach
Apr 21, 2022
Bobak Shahriari, Abbas Abdolmaleki, Arunkumar Byravan, Abe Friesen, Siqi Liu, Jost Tobias Springenberg, Nicolas Heess, Matt Hoffman, Martin Riedmiller

The Challenges of Exploration for Offline Reinforcement Learning
Jan 27, 2022
Nathan Lambert, Markus Wulfmeier, William Whitney, Arunkumar Byravan, Michael Bloesch, Vibhavari Dasagi, Tim Hertweck, Martin Riedmiller

Is Bang-Bang Control All You Need? Solving Continuous Control with Bernoulli Policies
Nov 03, 2021
Tim Seyde, Igor Gilitschenski, Wilko Schwarting, Bartolomeo Stellato, Martin Riedmiller, Markus Wulfmeier, Daniela Rus

Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes
Nov 03, 2021
Alex X. Lee, Coline Devin, Yuxiang Zhou, Thomas Lampe, Konstantinos Bousmalis, Jost Tobias Springenberg, Arunkumar Byravan, Abbas Abdolmaleki, Nimrod Gileadi, David Khosid, Claudio Fantacci, Jose Enrique Chen, Akhil Raju, Rae Jeong, Michael Neunert, Antoine Laurens, Stefano Saliceti, Federico Casarini, Martin Riedmiller, Raia Hadsell, Francesco Nori

Evaluating model-based planning and planner amortization for continuous control
Oct 07, 2021
Arunkumar Byravan, Leonard Hasenclever, Piotr Trochim, Mehdi Mirza, Alessandro Davide Ialongo, Yuval Tassa, Jost Tobias Springenberg, Abbas Abdolmaleki, Nicolas Heess, Josh Merel, Martin Riedmiller

Is Curiosity All You Need? On the Utility of Emergent Behaviours from Curious Exploration
Sep 17, 2021
Oliver Groth, Markus Wulfmeier, Giulia Vezzani, Vibhavari Dasagi, Tim Hertweck, Roland Hafner, Nicolas Heess, Martin Riedmiller

Collect & Infer -- a fresh look at data-efficient Reinforcement Learning
Aug 23, 2021
Martin Riedmiller, Jost Tobias Springenberg, Roland Hafner, Nicolas Heess

On Multi-objective Policy Optimization as a Tool for Reinforcement Learning
Jun 15, 2021
Abbas Abdolmaleki, Sandy H. Huang, Giulia Vezzani, Bobak Shahriari, Jost Tobias Springenberg, Shruti Mishra, Dhruva TB, Arunkumar Byravan, Konstantinos Bousmalis, Andras Gyorgy, Csaba Szepesvari, Raia Hadsell, Nicolas Heess, Martin Riedmiller

Rethinking Exploration for Sample-Efficient Policy Learning
Jan 23, 2021
William F. Whitney, Michael Bloesch, Jost Tobias Springenberg, Abbas Abdolmaleki, Martin Riedmiller
