
Alessandro Lazaric

Inria Lille - Nord Europe

A Structured Prediction Approach for Generalization in Cooperative Multi-Agent Reinforcement Learning

Oct 19, 2019

Word-order biases in deep-agent emergent communication

Jun 14, 2019

Gaussian Process Optimization with Adaptive Sketching: Scalable and No Regret

Mar 13, 2019

Active Exploration in Markov Decision Processes

Feb 28, 2019

Exploration Bonus for Regret Minimization in Undiscounted Discrete and Continuous Markov Decision Processes

Dec 11, 2018

Rotting bandits are no harder than stochastic ones

Nov 27, 2018

Efficient Bias-Span-Constrained Exploration-Exploitation in Reinforcement Learning

Jul 06, 2018

Near Optimal Exploration-Exploitation in Non-Communicating Markov Decision Processes

Jul 06, 2018

Reinforcement Learning in Rich-Observation MDPs using Spectral Methods

Jun 19, 2018

Distributed Adaptive Sampling for Kernel Matrix Approximation

Mar 27, 2018