
Matteo Pirotta

Regret Bounds for Kernel-Based Reinforcement Learning
Apr 12, 2020

Active Model Estimation in Markov Decision Processes
Mar 06, 2020

Exploration-Exploitation in Constrained MDPs
Mar 04, 2020

Adversarial Attacks on Linear Contextual Bandits
Feb 11, 2020

Improved Algorithms for Conservative Exploration in Bandits
Feb 08, 2020

Conservative Exploration in Reinforcement Learning
Feb 08, 2020

Concentration Inequalities for Multinoulli Random Variables
Jan 30, 2020

No-Regret Exploration in Goal-Oriented Reinforcement Learning
Jan 30, 2020

Exploiting Language Instructions for Interpretable and Compositional Reinforcement Learning
Jan 13, 2020

Frequentist Regret Bounds for Randomized Least-Squares Value Iteration
Nov 01, 2019