Matteo Pirotta

Simple Ingredients for Offline Reinforcement Learning
Mar 19, 2024
Edoardo Cetin, Andrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric, Yann Ollivier, Ahmed Touati

Layered State Discovery for Incremental Autonomous Exploration
Feb 07, 2023
Liyu Chen, Andrea Tirinzoni, Alessandro Lazaric, Matteo Pirotta

On the Complexity of Representation Learning in Contextual Linear Bandits
Dec 19, 2022
Andrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric

Improved Adaptive Algorithm for Scalable Active Learning with Weak Labeler
Nov 04, 2022
Yifang Chen, Karthik Sankararaman, Alessandro Lazaric, Matteo Pirotta, Dmytro Karamshuk, Qifan Wang, Karishma Mandyam, Sinong Wang, Han Fang

Scalable Representation Learning in Linear Contextual Bandits with Constant Regret Guarantees
Oct 24, 2022
Andrea Tirinzoni, Matteo Papini, Ahmed Touati, Alessandro Lazaric, Matteo Pirotta

Contextual bandits with concave rewards, and an application to fair ranking
Oct 18, 2022
Virginie Do, Elvis Dohmatob, Matteo Pirotta, Alessandro Lazaric, Nicolas Usunier

Reaching Goals is Hard: Settling the Sample Complexity of the Stochastic Shortest Path
Oct 10, 2022
Liyu Chen, Andrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric

Top $K$ Ranking for Multi-Armed Bandit with Noisy Evaluations
Dec 14, 2021
Evrard Garcelon, Vashist Avadhanula, Alessandro Lazaric, Matteo Pirotta

Privacy Amplification via Shuffling for Linear Contextual Bandits
Dec 11, 2021
Evrard Garcelon, Kamalika Chaudhuri, Vianney Perchet, Matteo Pirotta

Differentially Private Exploration in Reinforcement Learning with Linear Representation
Dec 07, 2021
Paul Luyo, Evrard Garcelon, Alessandro Lazaric, Matteo Pirotta
