Sarath Chandar

Lookbehind Optimizer: k steps back, 1 step forward

Jul 31, 2023
Gonçalo Mordido, Pranshu Malviya, Aristide Baratin, Sarath Chandar

Promoting Exploration in Memory-Augmented Adam using Critical Momenta

Jul 18, 2023
Pranshu Malviya, Gonçalo Mordido, Aristide Baratin, Reza Babanezhad Harikandeh, Jerry Huang, Simon Lacoste-Julien, Razvan Pascanu, Sarath Chandar

Thompson sampling for improved exploration in GFlowNets

Jun 30, 2023
Jarrid Rector-Brooks, Kanika Madan, Moksh Jain, Maksym Korablyov, Cheng-Hao Liu, Sarath Chandar, Nikolay Malkin, Yoshua Bengio

Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models

May 24, 2023
Amirhossein Kazemnejad, Mehdi Rezagholizadeh, Prasanna Parthasarathi, Sarath Chandar

Should We Attend More or Less? Modulating Attention for Fairness

May 22, 2023
Abdelrahman Zayed, Gonçalo Mordido, Samira Shabanian, Sarath Chandar

Conditionally Optimistic Exploration for Cooperative Deep Multi-Agent Reinforcement Learning

Mar 16, 2023
Xutong Zhao, Yangchen Pan, Chenjun Xiao, Sarath Chandar, Janarthanan Rajendran

Replay Buffer With Local Forgetting for Adaptive Deep Model-Based Reinforcement Learning

Mar 15, 2023
Ali Rahimi-Kalahroudi, Janarthanan Rajendran, Ida Momennejad, Harm van Seijen, Sarath Chandar

Dealing With Non-stationarity in Decentralized Cooperative Multi-Agent Deep Reinforcement Learning via Multi-Timescale Learning

Feb 06, 2023
Hadi Nekoei, Akilesh Badrinaaraayanan, Amit Sinha, Mohammad Amini, Janarthanan Rajendran, Aditya Mahajan, Sarath Chandar

Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness

Nov 25, 2022
Abdelrahman Zayed, Prasanna Parthasarathi, Gonçalo Mordido, Hamid Palangi, Samira Shabanian, Sarath Chandar
