Armand Joulin

Inria - École Normale Supérieure

PaSS: Parallel Speculative Sampling

Nov 22, 2023

ImageBind: One Embedding Space To Bind Them All

May 09, 2023

DINOv2: Learning Robust Visual Features without Supervision

Apr 14, 2023

The effectiveness of MAE pre-pretraining for billion-scale pretraining

Mar 23, 2023

LLaMA: Open and Efficient Foundation Language Models

Feb 27, 2023

Few-shot Learning with Retrieval Augmented Language Models

Aug 08, 2022

Improving Wikipedia Verifiability with AI

Jul 08, 2022

OmniMAE: Single Model Masked Pretraining on Images and Videos

Jun 16, 2022

Masked Siamese Networks for Label-Efficient Learning

Apr 14, 2022

Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision

Feb 22, 2022