
Donald Metzler

SCARF: Self-Supervised Contrastive Learning using Random Feature Corruption

Jun 29, 2021

How Reliable are Model Diagnostics?

May 12, 2021

Are Pre-trained Convolutions Better than Pre-trained Transformers?

May 07, 2021

Rethinking Search: Making Experts out of Dilettantes

May 05, 2021

OmniNet: Omnidirectional Representations from Transformers

Mar 01, 2021

Label Smoothed Embedding Hypothesis for Out-of-Distribution Detection

Feb 09, 2021

StructFormer: Joint Unsupervised Induction of Dependency and Constituency Structure from Masked Language Modeling

Dec 15, 2020

Long Range Arena: A Benchmark for Efficient Transformers

Nov 08, 2020

Surprise: Result List Truncation via Extreme Value Theory

Oct 19, 2020

Efficient Transformers: A Survey

Sep 16, 2020