Michael Lutter

Diminishing Return of Value Expansion Methods in Model-Based Reinforcement Learning

Mar 07, 2023
Daniel Palenicek, Michael Lutter, Joao Carvalho, Jan Peters


Revisiting Model-based Value Expansion

Mar 28, 2022
Daniel Palenicek, Michael Lutter, Jan Peters


A Differentiable Newton-Euler Algorithm for Real-World Robotics

Oct 24, 2021
Michael Lutter, Johannes Silberbauer, Joe Watson, Jan Peters


Continuous-Time Fitted Value Iteration for Robust Policies

Oct 05, 2021
Michael Lutter, Boris Belousov, Shie Mannor, Dieter Fox, Animesh Garg, Jan Peters


Combining Physics and Deep Learning to learn Continuous-Time Dynamics Models

Oct 05, 2021
Michael Lutter, Jan Peters


Learning Dynamics Models for Model Predictive Agents

Sep 29, 2021
Michael Lutter, Leonard Hasenclever, Arunkumar Byravan, Gabriel Dulac-Arnold, Piotr Trochim, Nicolas Heess, Josh Merel, Yuval Tassa


Robust Value Iteration for Continuous Control Tasks

May 25, 2021
Michael Lutter, Shie Mannor, Jan Peters, Dieter Fox, Animesh Garg


Value Iteration in Continuous Actions, States and Time

May 10, 2021
Michael Lutter, Shie Mannor, Jan Peters, Dieter Fox, Animesh Garg


Differentiable Physics Models for Real-world Offline Model-based Reinforcement Learning

Nov 03, 2020
Michael Lutter, Johannes Silberbauer, Joe Watson, Jan Peters


High Acceleration Reinforcement Learning for Real-World Juggling with Binary Rewards

Oct 31, 2020
Kai Ploeger, Michael Lutter, Jan Peters
