
Sarath Chandar

Post-hoc Interpretability for Neural NLP: A Survey

Aug 13, 2021
Andreas Madsen, Siva Reddy, Sarath Chandar

Demystifying Neural Language Models' Insensitivity to Word-Order

Jul 29, 2021
Louis Clouatre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar

Memory Augmented Optimizers for Deep Learning

Jun 20, 2021
Paul-Aymeric McRae, Prasanna Parthasarathi, Mahmoud Assran, Sarath Chandar

Do Encoder Representations of Generative Dialogue Models Encode Sufficient Information about the Task?

Jun 20, 2021
Prasanna Parthasarathi, Joelle Pineau, Sarath Chandar

A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic loss

Jun 20, 2021
Prasanna Parthasarathi, Mohamed Abdelsalam, Joelle Pineau, Sarath Chandar

A Survey of Data Augmentation Approaches for NLP

May 29, 2021
Steven Y. Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, Eduard Hovy

TAG: Task-based Accumulated Gradients for Lifelong learning

May 11, 2021
Pranshu Malviya, Balaraman Ravindran, Sarath Chandar
