Sebastian Blaes

Mind the Uncertainty: Risk-Aware and Actively Exploring Model-Based Reinforcement Learning
Sep 11, 2023
Marin Vlastelica, Sebastian Blaes, Cristina Pinneri, Georg Martius

Real Robot Challenge 2022: Learning Dexterous Manipulation from Offline Data in the Real World
Sep 04, 2023
Nico Gürtler, Felix Widmaier, Cansu Sancaktar, Sebastian Blaes, Pavel Kolev, Stefan Bauer, Manuel Wüthrich, Markus Wulfmeier, Martin Riedmiller, Arthur Allshire, Qiang Wang, Robert McCarthy, Hangyeol Kim, Jongchan Baek, Wookyong Kwon, Shanliang Qian, Yasunori Toshimitsu, Mike Yan Michelis, Amirhossein Kazemipour, Arman Raayatsanati, Hehui Zheng, Barnabas Gavin Cangan, Bernhard Schölkopf, Georg Martius

Benchmarking Offline Reinforcement Learning on Real-Robot Hardware
Jul 28, 2023
Nico Gürtler, Sebastian Blaes, Pavel Kolev, Felix Widmaier, Manuel Wüthrich, Stefan Bauer, Bernhard Schölkopf, Georg Martius

Optimistic Active Exploration of Dynamical Systems
Jun 21, 2023
Bhavya Sukhija, Lenart Treven, Cansu Sancaktar, Sebastian Blaes, Stelian Coros, Andreas Krause

Versatile Skill Control via Self-supervised Adversarial Imitation of Unlabeled Mixed Motions
Sep 16, 2022
Chenhao Li, Sebastian Blaes, Pavel Kolev, Marin Vlastelica, Jonas Frey, Georg Martius

Learning Agile Skills via Adversarial Imitation of Rough Partial Demonstrations
Jun 23, 2022
Chenhao Li, Marin Vlastelica, Sebastian Blaes, Jonas Frey, Felix Grimminger, Georg Martius

Curious Exploration via Structured World Models Yields Zero-Shot Object Manipulation
Jun 22, 2022
Cansu Sancaktar, Sebastian Blaes, Georg Martius

Sample-efficient Cross-Entropy Method for Real-time Planning
Aug 14, 2020
Cristina Pinneri, Shambhuraj Sawant, Sebastian Blaes, Jan Achterhold, Joerg Stueckler, Michal Rolinek, Georg Martius

Control What You Can: Intrinsically Motivated Task-Planning Agent
Jun 19, 2019
Sebastian Blaes, Marin Vlastelica Pogančić, Jia-Jie Zhu, Georg Martius
