Gerard de Melo

FOCUS: Effective Embedding Initialization for Specializing Pretrained Multilingual Models on a Single Language

May 23, 2023

MultiModal Bias: Introducing a Framework for Stereotypical Bias Assessment beyond Gender and Race in Vision Language Models

Mar 16, 2023

ViLPAct: A Benchmark for Compositional Generalization on Multimodal Human Activities

Oct 11, 2022

Frozen CLIP Models are Efficient Video Learners

Aug 06, 2022

Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models

Jun 10, 2022

Fast-R2D2: A Pretrained Recursive Neural Network based on Pruned CKY for Grammar Induction and Text Representation

Mar 01, 2022

Art Creation with Multi-Conditional StyleGANs

Feb 23, 2022

Does CLIP Benefit Visual Question Answering in the Medical Domain as Much as it Does in the General Domain?

Dec 27, 2021

NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation

Dec 06, 2021

Dense Contrastive Visual-Linguistic Pretraining

Sep 24, 2021