Florian Bordes

PUG: Photorealistic and Semantically Controllable Synthetic Data for Representation Learning

Aug 08, 2023
Florian Bordes, Shashank Shekhar, Mark Ibrahim, Diane Bouchacourt, Pascal Vincent, Ari S. Morcos

Synthetic image datasets offer unmatched advantages for designing and evaluating deep neural networks: they make it possible to (i) render as many data samples as needed, (ii) precisely control each scene and yield granular ground truth labels (and captions), and (iii) precisely control distribution shifts between training and testing to isolate variables of interest for sound experimentation. Despite such promise, the use of synthetic image data is still limited -- and often played down -- mainly due to their lack of realism. Most works therefore rely on datasets of real images, which have often been scraped from public images on the internet, and may have issues with regard to privacy, bias, and copyright, while offering little control over how objects precisely appear. In this work, we present a path to democratize the use of photorealistic synthetic data: we develop a new generation of interactive environments for representation learning research that offer both controllability and realism. We use the Unreal Engine, a powerful game engine well known in the entertainment industry, to produce PUG (Photorealistic Unreal Graphics) environments and datasets for representation learning. In this paper, we demonstrate the potential of PUG to enable more rigorous evaluations of vision models.
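
To make the notion of full scene control concrete, here is a minimal sketch of how a PUG-style dataset could be specified as a cross-product of controllable factors. The factor names, values, and file-naming scheme are hypothetical illustrations, not the actual PUG assets or rendering API.

```python
# Hypothetical factor lists; PUG's real assets and environments differ.
from itertools import product

OBJECTS = ["elephant", "car", "chair"]        # assumed asset names
BACKGROUNDS = ["grass", "desert", "indoor"]   # assumed environments
CAMERA_YAWS = [0, 90, 180, 270]               # degrees
LIGHTING = ["day", "night"]

def build_manifest():
    """Cross every controllable factor so each image carries exact ground truth."""
    manifest = []
    for obj, bg, yaw, light in product(OBJECTS, BACKGROUNDS, CAMERA_YAWS, LIGHTING):
        manifest.append({
            "object": obj,
            "background": bg,
            "camera_yaw": yaw,
            "lighting": light,
            # A real pipeline would hand these parameters to the Unreal Engine
            # renderer; here we only record the request that would be made.
            "render_request": f"{obj}_{bg}_{yaw}_{light}.png",
        })
    return manifest

if __name__ == "__main__":
    manifest = build_manifest()
    print(len(manifest), "fully labelled synthetic samples")
    print(manifest[0])
```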

Predicting masked tokens in stochastic locations improves masked image modeling

Jul 31, 2023
Amir Bar, Florian Bordes, Assaf Shocher, Mahmoud Assran, Pascal Vincent, Nicolas Ballas, Trevor Darrell, Amir Globerson, Yann LeCun

Self-supervised learning is a promising paradigm in deep learning that enables learning from unlabeled data by constructing pretext tasks that require learning useful representations. In natural language processing, the dominant pretext task has been masked language modeling (MLM), while in computer vision there exists an equivalent called Masked Image Modeling (MIM). However, MIM is challenging because it requires predicting semantic content at accurate locations. For example, given an incomplete picture of a dog, we can guess that there is a tail, but we cannot determine its exact location. In this work, we propose FlexPredict, a stochastic model that addresses this challenge by incorporating location uncertainty into the model. Specifically, we condition the model on stochastic masked token positions to guide the model toward learning features that are more robust to location uncertainties. Our approach improves downstream performance on a range of tasks; e.g., compared to MIM baselines, FlexPredict boosts ImageNet linear probing by 1.6% with ViT-B and semi-supervised video segmentation by 2.5% with ViT-L.
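
As a rough illustration of the core idea, the sketch below jitters the positional embeddings of masked tokens with Gaussian noise so that the predictor cannot rely on exact locations. This is an assumed simplification, not the authors' FlexPredict parameterization; the tensor shapes and noise scale are illustrative.

```python
# Minimal sketch: stochastic positions for masked tokens (assumed, simplified).
import torch

def noisy_masked_positions(pos_embed: torch.Tensor,
                           mask: torch.Tensor,
                           sigma: float = 0.25) -> torch.Tensor:
    """
    pos_embed: (B, N, D) positional embeddings for all patch tokens.
    mask:      (B, N) boolean, True where the patch is masked and must be predicted.
    Returns positional embeddings whose masked entries are jittered with Gaussian
    noise, pushing the model toward location-robust features.
    """
    noise = sigma * torch.randn_like(pos_embed)
    return torch.where(mask.unsqueeze(-1), pos_embed + noise, pos_embed)

if __name__ == "__main__":
    B, N, D = 2, 196, 768                     # e.g. 14x14 patches, ViT-B width
    pos = torch.randn(B, N, D)
    mask = torch.rand(B, N) < 0.75            # mask 75% of patches, typical for MIM
    print(noisy_masked_positions(pos, mask).shape)  # torch.Size([2, 196, 768])
```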

* Technical report 

Do SSL Models Have Déjà Vu? A Case of Unintended Memorization in Self-supervised Learning

Apr 28, 2023
Casey Meehan, Florian Bordes, Pascal Vincent, Kamalika Chaudhuri, Chuan Guo

Self-supervised learning (SSL) algorithms can produce useful image representations by learning to associate different parts of natural images with one another. However, when taken to the extreme, SSL models can unintentionally memorize specific parts of individual training samples rather than learning semantically meaningful associations. In this work, we perform a systematic study of the unintended memorization of image-specific information in SSL models -- which we refer to as déjà vu memorization. Concretely, we show that given the trained model and a crop of a training image containing only the background (e.g., water, sky, grass), it is possible to infer the foreground object with high accuracy or even visually reconstruct it. Furthermore, we show that déjà vu memorization is common to different SSL algorithms, is exacerbated by certain design choices, and cannot be detected by conventional techniques for evaluating representation quality. Our study of déjà vu memorization reveals previously unknown privacy risks in SSL models and suggests potential practical mitigation strategies. Code is available at https://github.com/facebookresearch/DejaVu.
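
A hedged sketch of what such a background-to-foreground inference test might look like: embed a background-only crop with the frozen SSL backbone, then take a majority vote over the labels of its nearest neighbours in embedding space. The backbone, reference set, and hyper-parameters below are toy stand-ins, not the paper's attack pipeline.

```python
# Toy nearest-neighbour probe; all models and data here are random stand-ins.
import torch
import torch.nn.functional as F

@torch.no_grad()
def infer_foreground_from_background(backbone, background_crop, ref_embeddings, ref_labels, k=20):
    """Return the majority label among the k nearest reference embeddings."""
    query = F.normalize(backbone(background_crop.unsqueeze(0)), dim=-1)   # (1, D)
    refs = F.normalize(ref_embeddings, dim=-1)                            # (M, D)
    topk = (query @ refs.t()).topk(k, dim=-1).indices.squeeze(0)          # (k,)
    return int(ref_labels[topk].mode().values)

if __name__ == "__main__":
    backbone = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64))
    crop = torch.randn(3, 32, 32)               # "background-only" crop (toy)
    refs = torch.randn(1000, 64)                # embeddings of a public reference set
    labels = torch.randint(0, 10, (1000,))      # their class labels
    print(infer_foreground_from_background(backbone, crop, refs, labels))
```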

Objectives Matter: Understanding the Impact of Self-Supervised Objectives on Vision Transformer Representations

Apr 25, 2023
Shashank Shekhar, Florian Bordes, Pascal Vincent, Ari Morcos

Joint-embedding based learning (e.g., SimCLR, MoCo, DINO) and reconstruction-based learning (e.g., BEiT, SimMIM, MAE) are the two leading paradigms for self-supervised learning of vision transformers, but they differ substantially in their transfer performance. Here, we aim to explain these differences by analyzing the impact of these objectives on the structure and transferability of the learned representations. Our analysis reveals that reconstruction-based learning features are significantly dissimilar to joint-embedding based learning features and that models trained with similar objectives learn similar features even across architectures. These differences arise early in the network and are primarily driven by attention and normalization layers. We find that joint-embedding features yield better linear probe transfer for classification because the different objectives drive different distributions of information and invariances in the learned representation. These differences explain opposite trends in transfer performance for downstream tasks that require spatial specificity in features. Finally, we address how fine-tuning changes reconstructive representations to enable better transfer, showing that fine-tuning re-organizes the information to be more similar to pre-trained joint embedding models.
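
One tool such representation comparisons commonly rely on is a similarity index between feature sets; the sketch below computes linear CKA between the features two models produce for the same inputs. It is a generic illustration, not the paper's analysis code, and the feature tensors are random placeholders.

```python
# Linear CKA between two feature matrices for the same N inputs (illustrative).
import torch

def linear_cka(x: torch.Tensor, y: torch.Tensor) -> float:
    """x: (N, D1), y: (N, D2). Returns a similarity score in [0, 1]."""
    x = x - x.mean(dim=0, keepdim=True)
    y = y - y.mean(dim=0, keepdim=True)
    cross = (x.t() @ y).norm() ** 2          # ||X^T Y||_F^2
    return float(cross / ((x.t() @ x).norm() * (y.t() @ y).norm()))

if __name__ == "__main__":
    feats_joint_embedding = torch.randn(512, 768)   # e.g. block-k features, model A
    feats_reconstruction = torch.randn(512, 768)    # e.g. block-k features, model B
    print(f"CKA = {linear_cka(feats_joint_embedding, feats_reconstruction):.3f}")
```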

A Cookbook of Self-Supervised Learning

Apr 24, 2023
Randall Balestriero, Mark Ibrahim, Vlad Sobal, Ari Morcos, Shashank Shekhar, Tom Goldstein, Florian Bordes, Adrien Bardes, Gregoire Mialon, Yuandong Tian, Avi Schwarzschild, Andrew Gordon Wilson, Jonas Geiping, Quentin Garrido, Pierre Fernandez, Amir Bar, Hamed Pirsiavash, Yann LeCun, Micah Goldblum

Self-supervised learning, dubbed the dark matter of intelligence, is a promising path to advance machine learning. Yet, much like cooking, training SSL methods is a delicate art with a high barrier to entry. While many components are familiar, successfully training an SSL method involves a dizzying set of choices, from the pretext tasks to training hyper-parameters. Our goal is to lower the barrier to entry into SSL research by laying out the foundations and latest SSL recipes in the style of a cookbook. We hope to empower the curious researcher to navigate the terrain of methods, understand the role of the various knobs, and gain the know-how required to explore how delicious SSL can be.

A surprisingly simple technique to control the pretraining bias for better transfer: Expand or Narrow your representation

Apr 11, 2023
Florian Bordes, Samuel Lavoie, Randall Balestriero, Nicolas Ballas, Pascal Vincent

Self-Supervised Learning (SSL) models rely on a pretext task to learn representations. Because this pretext task differs from the downstream tasks used to evaluate the performance of these models, there is an inherent misalignment, or pretraining bias. A commonly used trick in SSL, shown to make deep networks more robust to such bias, is the addition of a small projector (usually a 2- or 3-layer multi-layer perceptron) on top of the backbone network during training. In contrast to previous work that studied the impact of the projector architecture, here we focus on a simpler, yet overlooked, lever for controlling the information in the backbone representation. We show that merely changing its dimensionality -- by changing only the size of the backbone's very last block -- is a remarkably effective technique to mitigate the pretraining bias. It significantly improves downstream transfer performance for both self-supervised and supervised pretrained models.
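
A minimal sketch of this lever, using assumed toy architectures: keep the network and projector fixed and vary only the width of the backbone's final block, i.e., the dimensionality of the representation that downstream tasks will actually consume.

```python
# Toy backbone/projector pair; only `last_width` changes between runs.
import torch
from torch import nn

def make_backbone_and_projector(last_width: int, proj_dim: int = 256):
    backbone = nn.Sequential(
        nn.Flatten(),
        nn.Linear(3 * 32 * 32, 512), nn.ReLU(),
        nn.Linear(512, last_width), nn.ReLU(),   # the only knob being changed
    )
    projector = nn.Sequential(                   # small MLP head used only for pretraining
        nn.Linear(last_width, 2048), nn.ReLU(),
        nn.Linear(2048, 2048), nn.ReLU(),
        nn.Linear(2048, proj_dim),
    )
    return backbone, projector

if __name__ == "__main__":
    x = torch.randn(4, 3, 32, 32)
    for width in (512, 2048, 8192):              # "narrow" vs "expanded" representation
        backbone, projector = make_backbone_and_projector(width)
        print(width, backbone(x).shape, projector(backbone(x)).shape)
```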

Towards Democratizing Joint-Embedding Self-Supervised Learning

Mar 03, 2023
Florian Bordes, Randall Balestriero, Pascal Vincent

Joint Embedding Self-Supervised Learning (JE-SSL) has seen rapid development in recent years, due to its promise to effectively leverage large amounts of unlabeled data. The development of JE-SSL methods was driven primarily by the search for ever-increasing downstream classification accuracies, using huge computational resources, and typically built upon insights and intuitions inherited from a close parent JE-SSL method. This has unwittingly led to numerous preconceived ideas that carried over across methods, e.g., that SimCLR requires very large mini-batches to yield competitive accuracies, or that strong and computationally slow data augmentations are required. In this work, we debunk several such ill-formed a priori ideas in the hope of unleashing the full potential of JE-SSL free of unnecessary limitations. In fact, when carefully evaluating performance across different downstream tasks and properly optimizing the hyper-parameters of the methods, we most often -- if not always -- see that these widespread misconceptions do not hold. For example, we show that it is possible to train SimCLR to learn useful representations while using a single image patch as the negative example and simple Gaussian noise as the only data augmentation for the positive pair. Along these lines, in the hope of democratizing JE-SSL and allowing researchers to easily make more extensive evaluations of their methods, we introduce an optimized PyTorch library for SSL.
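
As an illustration of the stripped-down setup the abstract mentions, the sketch below builds positive pairs with nothing but additive Gaussian noise and scores them with a standard SimCLR-style NT-Xent loss. The encoder, noise level, and temperature are placeholders; this is not the released library's implementation.

```python
# Gaussian-noise-only positive pairs plus a standard NT-Xent loss (illustrative).
import torch
import torch.nn.functional as F

def gaussian_noise_pair(images: torch.Tensor, sigma: float = 0.1):
    """Two 'views' of each image that differ only by additive Gaussian noise."""
    return (images + sigma * torch.randn_like(images),
            images + sigma * torch.randn_like(images))

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1):
    """SimCLR loss over a batch of embedding pairs, each of shape (B, D)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=-1)     # (2B, D)
    sim = z @ z.t() / temperature                           # pairwise similarities
    sim.fill_diagonal_(float("-inf"))                       # exclude self-pairs
    batch = z1.shape[0]
    targets = torch.cat([torch.arange(batch, 2 * batch), torch.arange(0, batch)])
    return F.cross_entropy(sim, targets)

if __name__ == "__main__":
    encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
    view1, view2 = gaussian_noise_pair(torch.randn(8, 3, 32, 32))
    print(nt_xent_loss(encoder(view1), encoder(view2)).item())
```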

The Hidden Uniform Cluster Prior in Self-Supervised Learning

Oct 13, 2022
Mahmoud Assran, Randall Balestriero, Quentin Duval, Florian Bordes, Ishan Misra, Piotr Bojanowski, Pascal Vincent, Michael Rabbat, Nicolas Ballas

A successful paradigm in representation learning is to perform self-supervised pretraining using tasks based on mini-batch statistics (e.g., SimCLR, VICReg, SwAV, MSN). We show that the formulation of all these methods contains an overlooked prior that pushes the model to learn features enabling uniform clustering of the data. While this prior has led to remarkably semantic representations when pretraining on class-balanced data, such as ImageNet, we demonstrate that it can hamper performance when pretraining on class-imbalanced data. By moving away from conventional uniformity priors and instead preferring power-law distributed feature clusters, we show that one can improve the quality of the learned representations on real-world class-imbalanced datasets. To demonstrate this, we develop an extension of the Masked Siamese Networks (MSN) method to support the use of arbitrary feature priors.
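
To make the swap from a uniform prior to a power-law prior concrete, here is a minimal sketch that regularizes the batch-averaged cluster-assignment distribution toward a power-law target. It is an assumed simplification for illustration, not the MSN extension developed in the paper.

```python
# Power-law target over clusters and a KL regularizer toward it (illustrative).
import torch
import torch.nn.functional as F

def power_law_prior(num_clusters: int, tau: float = 0.5) -> torch.Tensor:
    """Target marginal p(k) proportional to k^(-tau); tau = 0 recovers the uniform prior."""
    ranks = torch.arange(1, num_clusters + 1, dtype=torch.float32)
    weights = ranks.pow(-tau)
    return weights / weights.sum()

def prior_regularizer(assignment_logits: torch.Tensor, prior: torch.Tensor) -> torch.Tensor:
    """KL( mean soft assignment over the batch || target prior )."""
    mean_assign = F.softmax(assignment_logits, dim=-1).mean(dim=0)      # (K,)
    return (mean_assign * (mean_assign.log() - prior.log())).sum()

if __name__ == "__main__":
    K = 100
    prior = power_law_prior(K, tau=0.5)
    logits = torch.randn(256, K)            # fake cluster logits for one batch
    print(prior_regularizer(logits, prior).item())
```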

Guillotine Regularization: Improving Deep Networks Generalization by Removing their Head

Jun 27, 2022
Florian Bordes, Randall Balestriero, Quentin Garrido, Adrien Bardes, Pascal Vincent

One unexpected technique that emerged in recent years consists of training a Deep Network (DN) with a Self-Supervised Learning (SSL) method and then using this network on downstream tasks, but with its last few layers entirely removed. This usually skimmed-over trick is actually critical for SSL methods to display competitive performance. For example, on ImageNet classification, more than 30 percentage points can be gained this way. This is a little vexing, as one would hope that the network layer at which invariance is explicitly enforced by the SSL criterion during training (the last layer) should be the one to use for best generalization performance downstream. But it seems not to be, and this study sheds some light on why. This trick, which we name Guillotine Regularization (GR), is in fact a generically applicable form of regularization that has also been used to improve generalization performance in transfer learning scenarios. In this work, through theory and experiments, we formalize GR and identify the underlying reasons behind its success in SSL methods. Our study shows that this trick is essential to SSL performance for two main reasons: (i) improper data augmentations used to define the positive pairs during training, and/or (ii) suboptimal selection of the hyper-parameters of the SSL loss.
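
A hedged sketch of the evaluation protocol behind Guillotine Regularization: probe the representation obtained after removing 0, 1, 2, ... of the final head layers. The backbone and head below are toy stand-ins; only the truncation logic reflects the idea.

```python
# Truncate a trained head layer by layer and expose each cut for probing (toy).
import torch
from torch import nn

def truncate_head(backbone: nn.Sequential, head: nn.Sequential, layers_removed: int) -> nn.Sequential:
    """Keep the backbone plus all but the last `layers_removed` head layers."""
    kept = list(head.children())[: len(head) - layers_removed]
    return nn.Sequential(backbone, *kept)

if __name__ == "__main__":
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 512), nn.ReLU())
    head = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 128))
    x = torch.randn(4, 3, 32, 32)
    for cut in range(len(head) + 1):         # cut = 0 keeps the full SSL head output
        feats = truncate_head(backbone, head, cut)(x)
        print(f"layers removed: {cut}, probe input dim: {feats.shape[-1]}")
```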
