
Alessandro Lazaric

INRIA Lille - Nord Europe

Learning Goal-Conditioned Policies Offline with Self-Supervised Reward Shaping

Jan 05, 2023

On the Complexity of Representation Learning in Contextual Linear Bandits

Dec 19, 2022

Improved Adaptive Algorithm for Scalable Active Learning with Weak Labeler

Nov 04, 2022

Scalable Representation Learning in Linear Contextual Bandits with Constant Regret Guarantees

Oct 24, 2022

Contextual bandits with concave rewards, and an application to fair ranking

Oct 18, 2022

Reaching Goals is Hard: Settling the Sample Complexity of the Stochastic Shortest Path

Oct 10, 2022

Linear Convergence of Natural Policy Gradient Methods with Log-Linear Policies

Oct 04, 2022

Temporal Abstractions-Augmented Temporally Contrastive Learning: An Alternative to the Laplacian in RL

Mar 21, 2022

Don't Change the Algorithm, Change the Data: Exploratory Data for Offline Reinforcement Learning

Feb 08, 2022

Scaling Gaussian Process Optimization by Evaluating a Few Unique Candidates Multiple Times

Jan 30, 2022