
Victor Sanh

Block Pruning For Faster Transformers

Sep 10, 2021

Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning

Sep 09, 2021

Datasets: A Community Library for Natural Language Processing

Sep 07, 2021

Low-Complexity Probing via Finding Subnetworks

Apr 08, 2021

Learning from others' mistakes: Avoiding dataset biases without modeling them

Dec 02, 2020

EdgeBERT: Optimizing On-Chip Inference for Multi-Task NLP

Dec 01, 2020

Movement Pruning: Adaptive Sparsity by Fine-Tuning

May 15, 2020

HuggingFace's Transformers: State-of-the-art Natural Language Processing

Oct 16, 2019

DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter

Oct 16, 2019

TransferTransfo: A Transfer Learning Approach for Neural Network Based Conversational Agents

Feb 04, 2019