
Donald Metzler

Retrieval-Enhanced Machine Learning (May 02, 2022)

Stretching Sentence-pair NLI Models to Reason over Long Documents and Clusters (Apr 15, 2022)

HyperPrompt: Prompt-based Task-Conditioning of Transformers (Mar 01, 2022)

A New Generation of Perspective API: Efficient Multilingual Character-level Transformers (Feb 22, 2022)

Transformer Memory as a Differentiable Search Index (Feb 16, 2022)

Atomized Search Length: Beyond User Models (Jan 05, 2022)

ExT5: Towards Extreme Multi-Task Scaling for Transfer Learning (Nov 22, 2021)

Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers (Sep 22, 2021)

The Benchmark Lottery (Jul 14, 2021)

Charformer: Fast Character Transformers via Gradient-based Subword Tokenization (Jul 02, 2021)