
Sarath Chandar

Post-hoc Interpretability for Neural NLP: A Survey

Aug 10, 2021
Andreas Madsen, Siva Reddy, Sarath Chandar

Demystifying Neural Language Models' Insensitivity to Word-Order

Jul 29, 2021
Louis Clouatre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar

Memory Augmented Optimizers for Deep Learning

Jun 20, 2021
Paul-Aymeric McRae, Prasanna Parthasarathi, Mahmoud Assran, Sarath Chandar

Do Encoder Representations of Generative Dialogue Models Encode Sufficient Information about the Task?

Jun 20, 2021
Prasanna Parthasarathi, Joelle Pineau, Sarath Chandar

A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic loss

Jun 20, 2021
Prasanna Parthasarathi, Mohamed Abdelsalam, Joelle Pineau, Sarath Chandar

A Survey of Data Augmentation Approaches for NLP

May 29, 2021
Steven Y. Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, Eduard Hovy

TAG: Task-based Accumulated Gradients for Lifelong learning

May 11, 2021
Pranshu Malviya, Balaraman Ravindran, Sarath Chandar

Continuous Coordination As a Realistic Scenario for Lifelong Learning

Mar 04, 2021
Hadi Nekoei, Akilesh Badrinaaraayanan, Aaron Courville, Sarath Chandar
