Sebastian Curi

Get Back Here: Robust Imitation by Return-to-Distribution Planning

May 02, 2023

Safe Reinforcement Learning via Confidence-Based Filters

Jul 04, 2022

Constrained Policy Optimization via Bayesian World Models

Feb 06, 2022

Combining Pessimism with Optimism for Robust and Efficient Model-Based Deep Reinforcement Learning

Mar 18, 2021

Risk-Averse Offline Reinforcement Learning

Feb 10, 2021

Logistic $Q$-Learning

Oct 21, 2020

Efficient Model-Based Reinforcement Learning through Optimistic Policy Search and Planning

Jul 13, 2020

Learning Controllers for Unstable Linear Quadratic Regulators from a Single Trajectory

Jun 19, 2020

Adaptive Sampling for Stochastic Risk-Averse Learning

Oct 28, 2019

Structured Variational Inference in Unstable Gaussian Process State Space Models

Jul 16, 2019