
Markus Wulfmeier

Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning

Apr 26, 2023
Tuomas Haarnoja, Ben Moran, Guy Lever, Sandy H. Huang, Dhruva Tirumala, Markus Wulfmeier, Jan Humplik, Saran Tunyasuvunakool, Noah Y. Siegel, Roland Hafner, Michael Bloesch, Kristian Hartikainen, Arunkumar Byravan, Leonard Hasenclever, Yuval Tassa, Fereshteh Sadeghi, Nathan Batchelor, Federico Casarini, Stefano Saliceti, Charles Game, Neil Sreendra, Kushal Patel, Marlon Gwira, Andrea Huber, Nicole Hurley, Francesco Nori, Raia Hadsell, Nicolas Heess


SkillS: Adaptive Skill Sequencing for Efficient Temporally-Extended Exploration

Dec 03, 2022
Giulia Vezzani, Dhruva Tirumala, Markus Wulfmeier, Dushyant Rao, Abbas Abdolmaleki, Ben Moran, Tuomas Haarnoja, Jan Humplik, Roland Hafner, Michael Neunert, Claudio Fantacci, Tim Hertweck, Thomas Lampe, Fereshteh Sadeghi, Nicolas Heess, Martin Riedmiller


Solving Continuous Control via Q-learning

Oct 22, 2022
Tim Seyde, Peter Werner, Wilko Schwarting, Igor Gilitschenski, Martin Riedmiller, Daniela Rus, Markus Wulfmeier


MO2: Model-Based Offline Options

Sep 05, 2022
Sasha Salter, Markus Wulfmeier, Dhruva Tirumala, Nicolas Heess, Martin Riedmiller, Raia Hadsell, Dushyant Rao


Offline Distillation for Robot Lifelong Learning with Imbalanced Experience

Apr 12, 2022
Wenxuan Zhou, Steven Bohez, Jan Humplik, Abbas Abdolmaleki, Dushyant Rao, Markus Wulfmeier, Tuomas Haarnoja, Nicolas Heess


Imitate and Repurpose: Learning Reusable Robot Movement Skills From Human and Animal Behaviors

Mar 31, 2022
Steven Bohez, Saran Tunyasuvunakool, Philemon Brakel, Fereshteh Sadeghi, Leonard Hasenclever, Yuval Tassa, Emilio Parisotto, Jan Humplik, Tuomas Haarnoja, Roland Hafner, Markus Wulfmeier, Michael Neunert, Ben Moran, Noah Siegel, Andrea Huber, Francesco Romano, Nathan Batchelor, Federico Casarini, Josh Merel, Raia Hadsell, Nicolas Heess


The Challenges of Exploration for Offline Reinforcement Learning

Jan 27, 2022
Nathan Lambert, Markus Wulfmeier, William Whitney, Arunkumar Byravan, Michael Bloesch, Vibhavari Dasagi, Tim Hertweck, Martin Riedmiller


Learning Transferable Motor Skills with Hierarchical Latent Mixture Policies

Dec 09, 2021
Dushyant Rao, Fereshteh Sadeghi, Leonard Hasenclever, Markus Wulfmeier, Martina Zambelli, Giulia Vezzani, Dhruva Tirumala, Yusuf Aytar, Josh Merel, Nicolas Heess, Raia Hadsell


Wish you were here: Hindsight Goal Selection for long-horizon dexterous manipulation

Dec 02, 2021
Todor Davchev, Oleg Sushkov, Jean-Baptiste Regli, Stefan Schaal, Yusuf Aytar, Markus Wulfmeier, Jon Scholz
