Letitia Parcalabescu

On Measuring Faithfulness of Natural Language Explanations

Nov 13, 2023
Letitia Parcalabescu, Anette Frank

ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models

Nov 13, 2023
Ilker Kesen, Andrea Pedrotti, Mustafa Dogan, Michele Cafagna, Emre Can Acikgoz, Letitia Parcalabescu, Iacer Calixto, Anette Frank, Albert Gatt, Aykut Erdem, Erkut Erdem

MM-SHAP: A Performance-agnostic Metric for Measuring Multimodal Contributions in Vision and Language Models & Tasks

Dec 15, 2022
Letitia Parcalabescu, Anette Frank

VALSE: A Task-Independent Benchmark for Vision and Language Models Centered on Linguistic Phenomena

Dec 14, 2021
Letitia Parcalabescu, Michele Cafagna, Lilitta Muradjan, Anette Frank, Iacer Calixto, Albert Gatt

MAGMA -- Multimodal Augmentation of Generative Models through Adapter-based Finetuning

Dec 09, 2021
Constantin Eichenberg, Sidney Black, Samuel Weinbach, Letitia Parcalabescu, Anette Frank

What is Multimodality?

Mar 10, 2021
Letitia Parcalabescu, Nils Trost, Anette Frank

Seeing past words: Testing the cross-modal capabilities of pretrained V&L models

Dec 22, 2020
Letitia Parcalabescu, Albert Gatt, Anette Frank, Iacer Calixto

AMR Similarity Metrics from Principles

Jan 29, 2020
Juri Opitz, Letitia Parcalabescu, Anette Frank
