Sarath Chandar

Continuous Coordination As a Realistic Scenario for Lifelong Learning

Mar 04, 2021
Hadi Nekoei, Akilesh Badrinaaraayanan, Aaron Courville, Sarath Chandar

IIRC: Incremental Implicitly-Refined Classification

Jan 11, 2021
Mohamed Abdelsalam, Mojtaba Faramarzi, Shagun Sodhani, Sarath Chandar

Maximum Reward Formulation In Reinforcement Learning

Oct 08, 2020
Sai Krishna Gottipati, Yashaswi Pathak, Rohan Nuttall, Sahir, Raviteja Chunduru, Ahmed Touati, Sriram Ganapathi Subramanian, Matthew E. Taylor, Sarath Chandar

MLMLM: Link Prediction with Mean Likelihood Masked Language Model

Sep 15, 2020
Louis Clouatre, Philippe Trempe, Amal Zouaq, Sarath Chandar

How To Evaluate Your Dialogue System: Probe Tasks as an Alternative for Token-level Evaluation Metrics

Aug 24, 2020
Prasanna Parthasarathi, Joelle Pineau, Sarath Chandar

Slot Contrastive Networks: A Contrastive Approach for Representing Objects

Jul 18, 2020
Evan Racah, Sarath Chandar

The LoCA Regret: A Consistent Metric to Evaluate Model-Based Behavior in Reinforcement Learning

Jul 07, 2020
Harm van Seijen, Hadi Nekoei, Evan Racah, Sarath Chandar

PatchUp: A Regularization Technique for Convolutional Neural Networks

Jun 14, 2020
Mojtaba Faramarzi, Mohammad Amini, Akilesh Badrinaaraayanan, Vikas Verma, Sarath Chandar

Learning To Navigate The Synthetically Accessible Chemical Space Using Reinforcement Learning

May 20, 2020
Sai Krishna Gottipati, Boris Sattarov, Sufeng Niu, Yashaswi Pathak, Haoran Wei, Shengchao Liu, Karam M. J. Thomas, Simon Blackburn, Connor W. Coley, Jian Tang, Sarath Chandar, Yoshua Bengio
