
Sebastian Blaes

Mind the Uncertainty: Risk-Aware and Actively Exploring Model-Based Reinforcement Learning

Sep 11, 2023

Real Robot Challenge 2022: Learning Dexterous Manipulation from Offline Data in the Real World

Sep 04, 2023

Benchmarking Offline Reinforcement Learning on Real-Robot Hardware

Jul 28, 2023

Optimistic Active Exploration of Dynamical Systems

Jun 21, 2023

Versatile Skill Control via Self-supervised Adversarial Imitation of Unlabeled Mixed Motions

Sep 16, 2022

Learning Agile Skills via Adversarial Imitation of Rough Partial Demonstrations

Jun 23, 2022

Curious Exploration via Structured World Models Yields Zero-Shot Object Manipulation

Jun 22, 2022

Sample-efficient Cross-Entropy Method for Real-time Planning

Aug 14, 2020

Control What You Can: Intrinsically Motivated Task-Planning Agent

Jun 19, 2019