
Martin Riedmiller

Leveraging Jumpy Models for Planning and Fast Learning in Robotic Domains

Feb 24, 2023

SkillS: Adaptive Skill Sequencing for Efficient Temporally-Extended Exploration

Dec 03, 2022

Solving Continuous Control via Q-learning

Oct 22, 2022

MO2: Model-Based Offline Options

Sep 05, 2022

Revisiting Gaussian mixture critics in off-policy reinforcement learning: a sample-based approach

Apr 22, 2022

The Challenges of Exploration for Offline Reinforcement Learning

Jan 27, 2022

Is Bang-Bang Control All You Need? Solving Continuous Control with Bernoulli Policies

Nov 03, 2021

Beyond Pick-and-Place: Tackling Robotic Stacking of Diverse Shapes

Nov 03, 2021

Evaluating model-based planning and planner amortization for continuous control

Oct 07, 2021

Is Curiosity All You Need? On the Utility of Emergent Behaviours from Curious Exploration

Sep 17, 2021