Roland Hafner

Representation Matters: Improving Perception and Exploration for Robotics

Nov 03, 2020
Markus Wulfmeier, Arunkumar Byravan, Tim Hertweck, Irina Higgins, Ankush Gupta, Tejas Kulkarni, Malcolm Reynolds, Denis Teplyashin, Roland Hafner, Thomas Lampe, Martin Riedmiller


"What, not how": Solving an under-actuated insertion task from scratch

Oct 30, 2020
Giulia Vezzani, Michael Neunert, Markus Wulfmeier, Rae Jeong, Thomas Lampe, Noah Siegel, Roland Hafner, Abbas Abdolmaleki, Martin Riedmiller, Francesco Nori

Figure 1 for "What, not how": Solving an under-actuated insertion task from scratch
Figure 2 for "What, not how": Solving an under-actuated insertion task from scratch
Figure 3 for "What, not how": Solving an under-actuated insertion task from scratch
Figure 4 for "What, not how": Solving an under-actuated insertion task from scratch
Viaarxiv icon

Towards General and Autonomous Learning of Core Skills: A Case Study in Locomotion

Aug 06, 2020
Roland Hafner, Tim Hertweck, Philipp Klöppner, Michael Bloesch, Michael Neunert, Markus Wulfmeier, Saran Tunyasuvunakool, Nicolas Heess, Martin Riedmiller


Data-efficient Hindsight Off-policy Option Learning

Jul 30, 2020
Markus Wulfmeier, Dushyant Rao, Roland Hafner, Thomas Lampe, Abbas Abdolmaleki, Tim Hertweck, Michael Neunert, Dhruva Tirumala, Noah Siegel, Nicolas Heess, Martin Riedmiller


Simple Sensor Intentions for Exploration

May 15, 2020
Tim Hertweck, Martin Riedmiller, Michael Bloesch, Jost Tobias Springenberg, Noah Siegel, Markus Wulfmeier, Roland Hafner, Nicolas Heess


Keep Doing What Worked: Behavioral Modelling Priors for Offline Reinforcement Learning

Feb 23, 2020
Noah Y. Siegel, Jost Tobias Springenberg, Felix Berkenkamp, Abbas Abdolmaleki, Michael Neunert, Thomas Lampe, Roland Hafner, Nicolas Heess, Martin Riedmiller


Continuous-Discrete Reinforcement Learning for Hybrid Control in Robotics

Jan 02, 2020
Michael Neunert, Abbas Abdolmaleki, Markus Wulfmeier, Thomas Lampe, Jost Tobias Springenberg, Roland Hafner, Francesco Romano, Jonas Buchli, Nicolas Heess, Martin Riedmiller


Imagined Value Gradients: Model-Based Policy Optimization with Transferable Latent Dynamics Models

Oct 09, 2019
Arunkumar Byravan, Jost Tobias Springenberg, Abbas Abdolmaleki, Roland Hafner, Michael Neunert, Thomas Lampe, Noah Siegel, Nicolas Heess, Martin Riedmiller


Regularized Hierarchical Policies for Compositional Transfer in Robotics

Jun 27, 2019
Markus Wulfmeier, Abbas Abdolmaleki, Roland Hafner, Jost Tobias Springenberg, Michael Neunert, Tim Hertweck, Thomas Lampe, Noah Siegel, Nicolas Heess, Martin Riedmiller
