
Lucas Beyer

Kubric: A scalable dataset generator
Mar 07, 2022

A Simple Single-Scale Vision Transformer for Object Localization and Instance Segmentation
Dec 17, 2021

LiT: Zero-Shot Transfer with Locked-image Text Tuning
Nov 15, 2021

The Efficiency Misnomer
Oct 25, 2021

How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers
Jun 18, 2021

Knowledge distillation: A good teacher is patient and consistent
Jun 09, 2021

Scaling Vision Transformers
Jun 08, 2021

MLP-Mixer: An all-MLP Architecture for Vision
May 17, 2021

SI-Score: An image dataset for fine-grained analysis of robustness to object location, rotation and size
Apr 09, 2021

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Oct 22, 2020