Alaaeldin El-Nouby

DataComp-LM: In search of the next generation of training sets for language models

Jun 18, 2024

Scalable Pre-training of Large Autoregressive Image Models

Jan 16, 2024

ImageBind: One Embedding Space To Bind Them All

May 09, 2023

DINOv2: Learning Robust Visual Features without Supervision

Apr 14, 2023

Are Visual Recognition Models Robust to Image Compression?

Apr 10, 2023

Improving Statistical Fidelity for Neural Image Compression with Implicit Local Likelihood Models

Jan 28, 2023

Image Compression with Product Quantized Masked Image Modeling

Dec 14, 2022

OmniMAE: Single Model Masked Pretraining on Images and Videos

Jun 16, 2022

Three things everyone should know about Vision Transformers

Mar 18, 2022

Augmenting Convolutional networks with attention-based aggregation

Dec 27, 2021