David Hoeller


Learning a State Representation and Navigation in Cluttered and Dynamic Environments

Mar 07, 2021
David Hoeller, Lorenz Wellhausen, Farbod Farshidian, Marco Hutter

(4 figures)

Joint Space Control via Deep Reinforcement Learning

Nov 12, 2020
Visak Kumar, David Hoeller, Balakumar Sundaralingam, Jonathan Tremblay, Stan Birchfield

(4 figures)

Learning a Contact-Adaptive Controller for Robust, Efficient Legged Locomotion

Oct 05, 2020
Xingye Da, Zhaoming Xie, David Hoeller, Byron Boots, Animashree Anandkumar, Yuke Zhu, Buck Babich, Animesh Garg

(4 figures)

Practical Reinforcement Learning For MPC: Learning from sparse objectives in under an hour on a real robot

Mar 06, 2020
Napat Karnchanachari, Miguel I. Valls, David Hoeller, Marco Hutter

(4 figures)

Deep Value Model Predictive Control

Oct 08, 2019
Farbod Farshidian, David Hoeller, Marco Hutter

(4 figures)