Sebastian Curi

Get Back Here: Robust Imitation by Return-to-Distribution Planning
May 02, 2023
Geoffrey Cideron, Baruch Tabanpour, Sebastian Curi, Sertan Girgin, Leonard Hussenot, Gabriel Dulac-Arnold, Matthieu Geist, Olivier Pietquin, Robert Dadashi

Safe Reinforcement Learning via Confidence-Based Filters
Jul 04, 2022
Sebastian Curi, Armin Lederer, Sandra Hirche, Andreas Krause

Constrained Policy Optimization via Bayesian World Models
Feb 06, 2022
Yarden As, Ilnura Usmanova, Sebastian Curi, Andreas Krause

Combining Pessimism with Optimism for Robust and Efficient Model-Based Deep Reinforcement Learning
Mar 18, 2021
Sebastian Curi, Ilija Bogunovic, Andreas Krause

Risk-Averse Offline Reinforcement Learning
Feb 10, 2021
Núria Armengol Urpí, Sebastian Curi, Andreas Krause

Logistic $Q$-Learning
Oct 21, 2020
Joan Bas-Serrano, Sebastian Curi, Andreas Krause, Gergely Neu

Efficient Model-Based Reinforcement Learning through Optimistic Policy Search and Planning
Jul 13, 2020
Sebastian Curi, Felix Berkenkamp, Andreas Krause

Learning Controllers for Unstable Linear Quadratic Regulators from a Single Trajectory
Jun 19, 2020
Lenart Treven, Sebastian Curi, Mojmir Mutny, Andreas Krause