Lucas N. Alegre

Open RL Benchmark: Comprehensive Tracked Experiments for Reinforcement Learning

Feb 05, 2024
Shengyi Huang, Quentin Gallouédec, Florian Felten, Antonin Raffin, Rousslan Fernand Julien Dossa, Yanxiao Zhao, Ryan Sullivan, Viktor Makoviychuk, Denys Makoviichuk, Mohamad H. Danesh, Cyril Roumégous, Jiayi Weng, Chufan Chen, Md Masudur Rahman, João G. M. Araújo, Guorui Quan, Daniel Tan, Timo Klein, Rujikorn Charakorn, Mark Towers, Yann Berthelot, Kinal Mehta, Dipam Chakraborty, Arjun KG, Valentin Charraut, Chang Ye, Zichen Liu, Lucas N. Alegre, Alexander Nikulin, Xiao Hu, Tianlin Liu, Jongwook Choi, Brent Yi


Sample-Efficient Multi-Objective Learning via Generalized Policy Improvement Prioritization

Jan 18, 2023
Lucas N. Alegre, Ana L. C. Bazzan, Diederik M. Roijers, Ann Nowé, Bruno C. da Silva


Optimistic Linear Support and Successor Features as a Basis for Optimal Policy Transfer

Jun 22, 2022
Lucas N. Alegre, Ana L. C. Bazzan, Bruno C. da Silva


Minimum-Delay Adaptation in Non-Stationary Reinforcement Learning via Online High-Confidence Change-Point Detection

May 20, 2021
Lucas N. Alegre, Ana L. C. Bazzan, Bruno C. da Silva


Quantifying the Impact of Non-Stationarity in Reinforcement Learning-Based Traffic Signal Control

Apr 09, 2020
Lucas N. Alegre, Ana L. C. Bazzan, Bruno C. da Silva
