Alessandro Lazaric

Word-order biases in deep-agent emergent communication

Jun 04, 2019
Rahma Chaabouni, Eugene Kharitonov, Alessandro Lazaric, Emmanuel Dupoux, Marco Baroni

Gaussian Process Optimization with Adaptive Sketching: Scalable and No Regret

Mar 13, 2019
Daniele Calandriello, Luigi Carratino, Alessandro Lazaric, Michal Valko, Lorenzo Rosasco

Active Exploration in Markov Decision Processes

Feb 28, 2019
Jean Tarbouriech, Alessandro Lazaric

Exploration Bonus for Regret Minimization in Undiscounted Discrete and Continuous Markov Decision Processes

Dec 11, 2018
Jian Qian, Ronan Fruit, Matteo Pirotta, Alessandro Lazaric

Rotting bandits are no harder than stochastic ones

Nov 27, 2018
Julien Seznec, Andrea Locatelli, Alexandra Carpentier, Alessandro Lazaric, Michal Valko

Efficient Bias-Span-Constrained Exploration-Exploitation in Reinforcement Learning

Jul 06, 2018
Ronan Fruit, Matteo Pirotta, Alessandro Lazaric, Ronald Ortner

Near Optimal Exploration-Exploitation in Non-Communicating Markov Decision Processes

Jul 06, 2018
Ronan Fruit, Matteo Pirotta, Alessandro Lazaric

Reinforcement Learning in Rich-Observation MDPs using Spectral Methods

Jun 19, 2018
Kamyar Azizzadenesheli, Alessandro Lazaric, Animashree Anandkumar

Distributed Adaptive Sampling for Kernel Matrix Approximation

Mar 27, 2018
Daniele Calandriello, Alessandro Lazaric, Michal Valko
