Evrard Garcelon

Improved Algorithms for Conservative Exploration in Bandits

Feb 08, 2020
Evrard Garcelon, Mohammad Ghavamzadeh, Alessandro Lazaric, Matteo Pirotta

Conservative Exploration in Reinforcement Learning

Feb 08, 2020
Evrard Garcelon, Mohammad Ghavamzadeh, Alessandro Lazaric, Matteo Pirotta

No-Regret Exploration in Goal-Oriented Reinforcement Learning

Jan 30, 2020
Jean Tarbouriech, Evrard Garcelon, Michal Valko, Matteo Pirotta, Alessandro Lazaric

Bandits with Side Observations: Bounded vs. Logarithmic Regret

Jul 10, 2018
Rémy Degenne, Evrard Garcelon, Vianney Perchet
