Thomas Lampe

Representation Matters: Improving Perception and Exploration for Robotics

Nov 03, 2020
Markus Wulfmeier, Arunkumar Byravan, Tim Hertweck, Irina Higgins, Ankush Gupta, Tejas Kulkarni, Malcolm Reynolds, Denis Teplyashin, Roland Hafner, Thomas Lampe, Martin Riedmiller


"What, not how": Solving an under-actuated insertion task from scratch

Oct 30, 2020
Giulia Vezzani, Michael Neunert, Markus Wulfmeier, Rae Jeong, Thomas Lampe, Noah Siegel, Roland Hafner, Abbas Abdolmaleki, Martin Riedmiller, Francesco Nori

Figure 1 for "What, not how": Solving an under-actuated insertion task from scratch
Figure 2 for "What, not how": Solving an under-actuated insertion task from scratch
Figure 3 for "What, not how": Solving an under-actuated insertion task from scratch
Figure 4 for "What, not how": Solving an under-actuated insertion task from scratch
Viaarxiv icon

Data-efficient Hindsight Off-policy Option Learning

Jul 30, 2020
Markus Wulfmeier, Dushyant Rao, Roland Hafner, Thomas Lampe, Abbas Abdolmaleki, Tim Hertweck, Michael Neunert, Dhruva Tirumala, Noah Siegel, Nicolas Heess, Martin Riedmiller


Keep Doing What Worked: Behavioral Modelling Priors for Offline Reinforcement Learning

Feb 23, 2020
Noah Y. Siegel, Jost Tobias Springenberg, Felix Berkenkamp, Abbas Abdolmaleki, Michael Neunert, Thomas Lampe, Roland Hafner, Nicolas Heess, Martin Riedmiller


Continuous-Discrete Reinforcement Learning for Hybrid Control in Robotics

Jan 02, 2020
Michael Neunert, Abbas Abdolmaleki, Markus Wulfmeier, Thomas Lampe, Jost Tobias Springenberg, Roland Hafner, Francesco Romano, Jonas Buchli, Nicolas Heess, Martin Riedmiller


Modelling Generalized Forces with Reinforcement Learning for Sim-to-Real Transfer

Oct 21, 2019
Rae Jeong, Jackie Kay, Francesco Romano, Thomas Lampe, Tom Rothörl, Abbas Abdolmaleki, Tom Erez, Yuval Tassa, Francesco Nori


Self-Supervised Sim-to-Real Adaptation for Visual Robotic Manipulation

Oct 21, 2019
Rae Jeong, Yusuf Aytar, David Khosid, Yuxiang Zhou, Jackie Kay, Thomas Lampe, Konstantinos Bousmalis, Francesco Nori


Imagined Value Gradients: Model-Based Policy Optimization with Transferable Latent Dynamics Models

Oct 09, 2019
Arunkumar Byravan, Jost Tobias Springenberg, Abbas Abdolmaleki, Roland Hafner, Michael Neunert, Thomas Lampe, Noah Siegel, Nicolas Heess, Martin Riedmiller


Regularized Hierarchical Policies for Compositional Transfer in Robotics

Jun 27, 2019
Markus Wulfmeier, Abbas Abdolmaleki, Roland Hafner, Jost Tobias Springenberg, Michael Neunert, Tim Hertweck, Thomas Lampe, Noah Siegel, Nicolas Heess, Martin Riedmiller
