
Roberto Dessì

Cross-Domain Image Captioning with Discriminative Finetuning

Apr 04, 2023

Can discrete information extraction prompts generalize across language models?

Mar 07, 2023

Referential communication in heterogeneous communities of pre-trained visual deep networks

Feb 20, 2023

Augmented Language Models: a Survey

Feb 15, 2023

Toolformer: Language Models Can Teach Themselves to Use Tools

Feb 09, 2023

Can Transformers Jump Around Right in Natural Language? Assessing Performance Transfer from SCAN

Jul 03, 2021

Interpretable agent communication from scratch (with a generic visual processor emerging on the side)

Jun 08, 2021

Focus on What's Informative and Ignore What's not: Communication Strategies in a Referential Game

Nov 05, 2019

CNNs found to jump around more skillfully than RNNs: Compositional generalization in seq2seq convolutional networks

May 21, 2019

Fine-tuning on Clean Data for End-to-End Speech Translation: FBK @ IWSLT 2018

Oct 16, 2018