Alessandro Lazaric

INRIA Lille - Nord Europe

Near-linear Time Gaussian Process Optimization with Adaptive Batching and Resparsification

Feb 26, 2020

Adversarial Attacks on Linear Contextual Bandits

Feb 11, 2020

Improved Algorithms for Conservative Exploration in Bandits

Feb 08, 2020

Conservative Exploration in Reinforcement Learning

Feb 08, 2020
Figure 1 for Conservative Exploration in Reinforcement Learning
Figure 2 for Conservative Exploration in Reinforcement Learning
Figure 3 for Conservative Exploration in Reinforcement Learning
Figure 4 for Conservative Exploration in Reinforcement Learning
Viaarxiv icon

Concentration Inequalities for Multinoulli Random Variables

Jan 30, 2020

No-Regret Exploration in Goal-Oriented Reinforcement Learning

Jan 30, 2020

Frequentist Regret Bounds for Randomized Least-Squares Value Iteration

Nov 01, 2019

A Structured Prediction Approach for Generalization in Cooperative Multi-Agent Reinforcement Learning

Oct 19, 2019

Word-order biases in deep-agent emergent communication

Jun 14, 2019

Gaussian Process Optimization with Adaptive Sketching: Scalable and No Regret

Mar 13, 2019