Alessandro Lazaric

Mastering Visual Continuous Control: Improved Data-Augmented Reinforcement Learning

Jul 20, 2021
Denis Yarats, Rob Fergus, Alessandro Lazaric, Lerrel Pinto

A Fully Problem-Dependent Regret Lower Bound for Finite-Horizon MDPs

Jun 24, 2021
Andrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric

A Unified Framework for Conservative Exploration

Jun 22, 2021
Yunchang Yang, Tianhao Wu, Han Zhong, Evrard Garcelon, Matteo Pirotta, Alessandro Lazaric, Liwei Wang, Simon S. Du

Stochastic Shortest Path: Minimax, Parameter-Free and Towards Horizon-Free Regret

Apr 22, 2021
Jean Tarbouriech, Runlong Zhou, Simon S. Du, Matteo Pirotta, Michal Valko, Alessandro Lazaric

Leveraging Good Representations in Linear Contextual Bandits

Apr 08, 2021
Matteo Papini, Andrea Tirinzoni, Marcello Restelli, Alessandro Lazaric, Matteo Pirotta

Reinforcement Learning with Prototypical Representations

Feb 22, 2021
Denis Yarats, Rob Fergus, Alessandro Lazaric, Lerrel Pinto

Improved Sample Complexity for Incremental Autonomous Exploration in MDPs

Dec 29, 2020
Jean Tarbouriech, Matteo Pirotta, Michal Valko, Alessandro Lazaric

An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits

Oct 23, 2020
Andrea Tirinzoni, Matteo Pirotta, Marcello Restelli, Alessandro Lazaric

Provably Efficient Reward-Agnostic Navigation with Linear Value Iteration

Aug 18, 2020
Andrea Zanette, Alessandro Lazaric, Mykel J. Kochenderfer, Emma Brunskill

Efficient Optimistic Exploration in Linear-Quadratic Regulators via Lagrangian Relaxation

Jul 13, 2020
Marc Abeille, Alessandro Lazaric
