
Iulia Turc

Pix2Struct: Screenshot Parsing as Pretraining for Visual Language Understanding

Oct 07, 2022

Measuring Attribution in Natural Language Generation Models

Dec 23, 2021

Revisiting the Primacy of English in Zero-shot Cross-lingual Transfer

Jun 30, 2021

The MultiBERTs: BERT Reproductions for Robustness Analysis

Jun 30, 2021

CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation

Mar 31, 2021

Well-Read Students Learn Better: On the Importance of Pre-training Compact Models

Sep 25, 2019