Sebastian Ruder

NaijaSenti: A Nigerian Twitter Sentiment Corpus for Multilingual Sentiment Analysis

Jan 28, 2022

NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation

Dec 06, 2021

ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning

Nov 22, 2021

Balancing Average and Worst-case Accuracy in Multitask Learning

Oct 12, 2021

FewNLU: Benchmarking State-of-the-Art Methods for Few-Shot Natural Language Understanding

Sep 27, 2021

Efficient Test Time Adapter Ensembling for Low-resource Language Varieties

Sep 10, 2021

Charformer: Fast Character Transformers via Gradient-based Subword Tokenization

Jul 02, 2021

Compacter: Efficient Low-Rank Hypercomplex Adapter Layers

Jun 08, 2021

Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks

Jun 08, 2021

BERT memorisation and pitfalls in low-resource scenarios

Apr 16, 2021