
Aida Nematzadeh

Pragmatics in Grounded Language Learning: Phenomena, Tasks, and Modeling Approaches

Nov 15, 2022

MAPL: Parameter-Efficient Adaptation of Unimodal Pre-Trained Models for Vision-Language Few-Shot Prompting

Oct 13, 2022

Rethinking Evaluation Practices in Visual Question Answering: A Case Study on Out-of-Distribution Generalization

May 24, 2022

Flamingo: a Visual Language Model for Few-Shot Learning

Apr 29, 2022

Scaling Language Models: Methods, Analysis & Insights from Training Gopher

Dec 08, 2021

A Systematic Investigation of Commonsense Understanding in Large Language Models

Oct 31, 2021

Probing Image-Language Transformers for Verb Understanding

Jun 16, 2021

Decoupling the Role of Data, Attention, and Losses in Multimodal Transformers

Jan 31, 2021

Competition in Cross-situational Word Learning: A Computational Study

Dec 06, 2020

Learning to Segment Actions from Observation and Narration

May 07, 2020