James Henderson

What Do Compressed Multilingual Machine Translation Models Forget?
May 22, 2022
Alireza Mohammadshahi, Vassilina Nikoulina, Alexandre Berard, Caroline Brun, James Henderson, Laurent Besacier

PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models
Apr 03, 2022
Rabeeh Karimi Mahabadi, Luke Zettlemoyer, James Henderson, Marzieh Saeidi, Lambert Mathias, Veselin Stoyanov, Majid Yazdani

Graph Refinement for Coreference Resolution
Mar 30, 2022
Lesly Miculicich, James Henderson

HyperMixer: An MLP-based Green AI Alternative to Transformers
Mar 07, 2022
Florian Mai, Arnaud Pannatier, Fabio Fehr, Haolin Chen, Francois Marelli, Francois Fleuret, James Henderson

Bag-of-Vectors Autoencoders for Unsupervised Conditional Text Generation
Oct 13, 2021
Florian Mai, James Henderson

Imposing Relation Structure in Language-Model Embeddings Using Contrastive Learning
Sep 04, 2021
Christos Theodoropoulos, James Henderson, Andrei C. Coman, Marie-Francine Moens

The DCU-EPFL Enhanced Dependency Parser at the IWPT 2021 Shared Task
Jul 05, 2021
James Barry, Alireza Mohammadshahi, Joachim Wagner, Jennifer Foster, James Henderson

Variational Information Bottleneck for Effective Low-Resource Fine-Tuning
Jun 10, 2021
Rabeeh Karimi Mahabadi, Yonatan Belinkov, James Henderson

Compacter: Efficient Low-Rank Hypercomplex Adapter Layers
Jun 08, 2021
Rabeeh Karimi Mahabadi, James Henderson, Sebastian Ruder

Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks
Jun 08, 2021
Rabeeh Karimi Mahabadi, Sebastian Ruder, Mostafa Dehghani, James Henderson