Albert Gatt

ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models
Nov 13, 2023
Ilker Kesen, Andrea Pedrotti, Mustafa Dogan, Michele Cafagna, Emre Can Acikgoz, Letitia Parcalabescu, Iacer Calixto, Anette Frank, Albert Gatt, Aykut Erdem, Erkut Erdem

FTFT: efficient and robust Fine-Tuning by transFerring Training dynamics
Oct 10, 2023
Yupei Du, Albert Gatt, Dong Nguyen

The Scenario Refiner: Grounding subjects in images at the morphological level
Sep 20, 2023
Claudia Tagliaferri, Sofia Axioti, Albert Gatt, Denis Paperno

Contrast Is All You Need
Jul 06, 2023
Burak Kilic, Floris Bex, Albert Gatt

Interpreting Vision and Language Generative Models with Semantic Visual Priors
May 04, 2023
Michele Cafagna, Lina M. Rojas-Barahona, Kees van Deemter, Albert Gatt

Missing Information, Unresponsive Authors, Experimental Flaws: The Impossibility of Assessing the Reproducibility of Previous Human Evaluations in NLP
May 02, 2023
Anya Belz, Craig Thomson, Ehud Reiter, Gavin Abercrombie, Jose M. Alonso-Moral, Mohammad Arvan, Jackie Cheung, Mark Cieliebak, Elizabeth Clark, Kees van Deemter, Tanvi Dinkar, Ondřej Dušek, Steffen Eger, Qixiang Fang, Albert Gatt, Dimitra Gkatzia, Javier González-Corbelle, Dirk Hovy, Manuela Hürlimann, Takumi Ito, John D. Kelleher, Filip Klubička, Huiyuan Lai, Chris van der Lee, Emiel van Miltenburg, Yiru Li, Saad Mahamood, Margot Mieskes, Malvina Nissim, Natalie Parde, Ondřej Plátek, Verena Rieser, Pablo Mosteiro Romero, Joel Tetreault, Antonio Toral, Xiaojun Wan, Leo Wanner, Lewis Watson, Diyi Yang

HL Dataset: Grounding High-Level Linguistic Concepts in Vision
Feb 23, 2023
Michele Cafagna, Kees van Deemter, Albert Gatt

Understanding Cross-modal Interactions in V&L Models that Generate Scene Descriptions
Nov 10, 2022
Michele Cafagna, Kees van Deemter, Albert Gatt

Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and BERT Models for Maltese
May 26, 2022
Kurt Micallef, Albert Gatt, Marc Tanti, Lonneke van der Plas, Claudia Borg

Face2Text revisited: Improved data set and baseline results
May 24, 2022
Marc Tanti, Shaun Abdilla, Adrian Muscat, Claudia Borg, Reuben A. Farrugia, Albert Gatt
