Mannes Poel

Trajectory Generation, Control, and Safety with Denoising Diffusion Probabilistic Models

Jun 27, 2023
Nicolò Botteghi, Federico Califano, Mannes Poel, Christoph Brune

Unsupervised Representation Learning in Deep Reinforcement Learning: A Review

Aug 27, 2022
Nicolò Botteghi, Mannes Poel, Christoph Brune

Towards Autonomous Pipeline Inspection with Hierarchical Reinforcement Learning

Jul 08, 2021
Nicolò Botteghi, Luuk Grefte, Mannes Poel, Beril Sirmacek, Christoph Brune, Edwin Dertien, Stefano Stramigioli

Low-Dimensional State and Action Representation Learning with MDP Homomorphism Metrics

Jul 04, 2021
Nicolò Botteghi, Mannes Poel, Beril Sirmacek, Christoph Brune

Low Dimensional State Representation Learning with Robotics Priors in Continuous Action Spaces

Jul 04, 2021
Nicolò Botteghi, Khaled Alaa, Mannes Poel, Beril Sirmacek, Christoph Brune, Abeje Mersha, Stefano Stramigioli

Low Dimensional State Representation Learning with Reward-shaped Priors

Jul 29, 2020
Nicolò Botteghi, Ruben Obbink, Daan Geijs, Mannes Poel, Beril Sirmacek, Christoph Brune, Abeje Mersha, Stefano Stramigioli

On Reward Shaping for Mobile Robot Navigation: A Reinforcement Learning and SLAM Based Approach

Feb 10, 2020
Nicolò Botteghi, Beril Sirmacek, Khaled A. A. Mustafa, Mannes Poel, Stefano Stramigioli
