Florian Bordes

A Picture is Worth More Than 77 Text Tokens: Evaluating CLIP-Style Models on Dense Captions

Dec 14, 2023
Jack Urbanek, Florian Bordes, Pietro Astolfi, Mary Williamson, Vasu Sharma, Adriana Romero-Soriano

Feedback-guided Data Synthesis for Imbalanced Classification

Sep 29, 2023
Reyhane Askari Hemmat, Mohammad Pezeshki, Florian Bordes, Michal Drozdzal, Adriana Romero-Soriano

PUG: Photorealistic and Semantically Controllable Synthetic Data for Representation Learning

Aug 08, 2023
Florian Bordes, Shashank Shekhar, Mark Ibrahim, Diane Bouchacourt, Pascal Vincent, Ari S. Morcos

Predicting masked tokens in stochastic locations improves masked image modeling

Jul 31, 2023
Amir Bar, Florian Bordes, Assaf Shocher, Mahmoud Assran, Pascal Vincent, Nicolas Ballas, Trevor Darrell, Amir Globerson, Yann LeCun

Do SSL Models Have Déjà Vu? A Case of Unintended Memorization in Self-supervised Learning

Apr 28, 2023
Casey Meehan, Florian Bordes, Pascal Vincent, Kamalika Chaudhuri, Chuan Guo

Objectives Matter: Understanding the Impact of Self-Supervised Objectives on Vision Transformer Representations

Apr 25, 2023
Shashank Shekhar, Florian Bordes, Pascal Vincent, Ari Morcos

A Cookbook of Self-Supervised Learning

Apr 24, 2023
Randall Balestriero, Mark Ibrahim, Vlad Sobal, Ari Morcos, Shashank Shekhar, Tom Goldstein, Florian Bordes, Adrien Bardes, Gregoire Mialon, Yuandong Tian, Avi Schwarzschild, Andrew Gordon Wilson, Jonas Geiping, Quentin Garrido, Pierre Fernandez, Amir Bar, Hamed Pirsiavash, Yann LeCun, Micah Goldblum

A surprisingly simple technique to control the pretraining bias for better transfer: Expand or Narrow your representation

Apr 11, 2023
Florian Bordes, Samuel Lavoie, Randall Balestriero, Nicolas Ballas, Pascal Vincent

Towards Democratizing Joint-Embedding Self-Supervised Learning

Mar 03, 2023
Florian Bordes, Randall Balestriero, Pascal Vincent
