Brendan O'Donoghue

Probabilistic Inference in Reinforcement Learning Done Right
Nov 22, 2023

Efficient exploration via epistemic-risk-seeking policy optimization
Feb 18, 2023

ReLOAD: Reinforcement Learning with Optimistic Ascent-Descent for Last-Iterate Convergence in Constrained MDPs
Feb 02, 2023

Optimistic Meta-Gradients
Jan 09, 2023

POMRL: No-Regret Learning-to-Plan with Increasing Horizons
Dec 30, 2022

Variational Bayesian Optimistic Sampling
Oct 29, 2021

Evaluating Predictive Distributions: Does Bayesian Deep Learning Work?
Oct 09, 2021

Discovering Diverse Nearly Optimal Policies with Successor Features
Jun 01, 2021

Reward is enough for convex MDPs
Jun 01, 2021

Discovering a set of policies for the worst case reward
Feb 08, 2021