Rahim Entezari

DataComp: In search of the next generation of multimodal datasets

May 03, 2023
Samir Yitzhak Gadre, Gabriel Ilharco, Alex Fang, Jonathan Hayase, Georgios Smyrnis, Thao Nguyen, Ryan Marten, Mitchell Wortsman, Dhruba Ghosh, Jieyu Zhang, Eyal Orgad, Rahim Entezari, Giannis Daras, Sarah Pratt, Vivek Ramanujan, Yonatan Bitton, Kalyani Marathe, Stephen Mussmann, Richard Vencu, Mehdi Cherti, Ranjay Krishna, Pang Wei Koh, Olga Saukh, Alexander Ratner, Shuran Song, Hannaneh Hajishirzi, Ali Farhadi, Romain Beaumont, Sewoong Oh, Alex Dimakis, Jenia Jitsev, Yair Carmon, Vaishaal Shankar, Ludwig Schmidt

Large multimodal datasets have been instrumental in recent breakthroughs such as CLIP, Stable Diffusion, and GPT-4. At the same time, datasets rarely receive the same research attention as model architectures or training algorithms. To address this shortcoming in the machine learning ecosystem, we introduce DataComp, a benchmark where the training code is fixed and researchers innovate by proposing new training sets. We provide a testbed for dataset experiments centered around a new candidate pool of 12.8B image-text pairs from Common Crawl. Participants in our benchmark design new filtering techniques or curate new data sources and then evaluate their new dataset by running our standardized CLIP training code and testing on 38 downstream test sets. Our benchmark consists of multiple scales, with four candidate pool sizes and associated compute budgets ranging from 12.8M to 12.8B samples seen during training. This multi-scale design facilitates the study of scaling trends and makes the benchmark accessible to researchers with varying resources. Our baseline experiments show that the DataComp workflow is a promising way of improving multimodal datasets. We introduce DataComp-1B, a dataset created by applying a simple filtering algorithm to the 12.8B candidate pool. The resulting 1.4B subset enables training a CLIP ViT-L/14 from scratch to 79.2% zero-shot accuracy on ImageNet. Our new ViT-L/14 model outperforms a larger ViT-g/14 trained on LAION-2B by 0.7 percentage points while requiring 9x less training compute. We also outperform OpenAI's CLIP ViT-L/14, which is trained with the same compute budget as our model, by 3.7 percentage points. These gains highlight the potential for improving model performance by carefully curating training sets. We view DataComp-1B as only the first step and hope that DataComp paves the way toward the next generation of multimodal datasets.
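
As a concrete illustration of the kind of filtering baseline the benchmark invites, the sketch below scores image-text pairs with a CLIP-style model and keeps the highest-scoring ones. This is only a minimal sketch of CLIP-score filtering, not the exact DataComp-1B recipe; the model interface (encode_image/encode_text) and the threshold value are assumptions.

```python
# Hypothetical sketch of CLIP-score filtering for image-text pairs.
# Assumes a CLIP-style model exposing encode_image / encode_text
# (e.g. an open_clip model); the threshold value is illustrative only.
import torch

@torch.no_grad()
def clip_score_filter(model, images, texts, threshold=0.3):
    """Keep image-text pairs whose cosine similarity exceeds `threshold`."""
    img = model.encode_image(images)            # (N, D) image embeddings
    txt = model.encode_text(texts)              # (N, D) text embeddings
    img = img / img.norm(dim=-1, keepdim=True)  # L2-normalize
    txt = txt / txt.norm(dim=-1, keepdim=True)
    scores = (img * txt).sum(dim=-1)            # per-pair cosine similarity
    return scores > threshold, scores
```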

The Role of Pre-training Data in Transfer Learning

Mar 01, 2023
Rahim Entezari, Mitchell Wortsman, Olga Saukh, M. Moein Shariatnia, Hanie Sedghi, Ludwig Schmidt

The transfer learning paradigm of model pre-training and subsequent fine-tuning produces high-accuracy models. While most studies recommend scaling up the pre-training data to benefit most from transfer learning, a question remains: what data and method should be used for pre-training? We investigate the impact of the pre-training data distribution on few-shot and full fine-tuning performance using 3 pre-training methods (supervised, contrastive language-image, and contrastive image-image), 7 pre-training datasets, and 9 downstream datasets. Through extensive controlled experiments, we find that the choice of the pre-training data source is essential for few-shot transfer, but its role decreases as more data is made available for fine-tuning. Additionally, we explore the role of data curation and examine the trade-offs between label noise and the size of the pre-training dataset. We find that using 2000X more pre-training data from LAION can match the performance of supervised ImageNet pre-training. Furthermore, we investigate the effect of pre-training methods, comparing language-image contrastive vs. image-image contrastive, and find that the latter leads to better downstream accuracy.
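
To make the experimental setup concrete, the sketch below fine-tunes a generic pre-trained backbone on a small (few-shot) downstream loader. The backbone's out_dim attribute, the optimizer, and the step budget are placeholders rather than the paper's exact protocol.

```python
# Illustrative k-shot fine-tuning loop for comparing pre-trained backbones.
# `backbone.out_dim` is an assumed attribute giving the feature dimension.
import torch
import torch.nn as nn

def finetune_few_shot(backbone, loader, num_classes, steps=200, lr=1e-3):
    """Attach a fresh linear head to a pre-trained backbone and fine-tune
    the whole model on a small (k-shot) downstream loader."""
    head = nn.Linear(backbone.out_dim, num_classes)
    model = nn.Sequential(backbone, head)
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    it = iter(loader)
    for _ in range(steps):
        try:
            x, y = next(it)
        except StopIteration:          # restart the small loader as needed
            it = iter(loader)
            x, y = next(it)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model
```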

REPAIR: REnormalizing Permuted Activations for Interpolation Repair

Nov 15, 2022
Keller Jordan, Hanie Sedghi, Olga Saukh, Rahim Entezari, Behnam Neyshabur

In this paper, we look into the conjecture of Entezari et al. (2021), which states that if the permutation invariance of neural networks is taken into account, then there is likely no loss barrier to the linear interpolation between SGD solutions. First, we observe that neuron alignment methods alone are insufficient to establish low-barrier linear connectivity between SGD solutions due to a phenomenon we call variance collapse: interpolated deep networks suffer a collapse in the variance of their activations, causing poor performance. Next, we propose REPAIR (REnormalizing Permuted Activations for Interpolation Repair), which mitigates variance collapse by rescaling the preactivations of such interpolated networks. We explore the interaction between our method and the choice of normalization layer, network width, and depth, and demonstrate that using REPAIR on top of neuron alignment methods leads to 60%-100% relative barrier reduction across a wide variety of architecture families and tasks. In particular, we report a 74% barrier reduction for ResNet50 on ImageNet and a 90% barrier reduction for ResNet18 on CIFAR10.
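
The core rescaling step can be sketched as follows, assuming pre-activation tensors collected on a shared calibration batch (e.g., via forward hooks) from the two endpoint networks and the weight-interpolated network. This is a simplified illustration of the REPAIR idea, not the full implementation.

```python
# Minimal sketch of the REPAIR rescaling: match the per-channel statistics of
# the interpolated network's pre-activations to the interpolation of the
# endpoints' statistics. Single-batch calibration is a simplification.
import torch

@torch.no_grad()
def repair_rescale(pre_a, pre_b, pre_interp, alpha=0.5, eps=1e-5):
    """pre_* have shape (N, C, ...) and come from the same calibration batch,
    from networks A, B, and the weight-interpolated network respectively."""
    dims = tuple(d for d in range(pre_a.dim()) if d != 1)  # all but channel dim
    mu_a, std_a = pre_a.mean(dims), pre_a.std(dims)
    mu_b, std_b = pre_b.mean(dims), pre_b.std(dims)
    mu_i, std_i = pre_interp.mean(dims), pre_interp.std(dims)
    # Target statistics: linear interpolation of the endpoint statistics.
    mu_t = (1 - alpha) * mu_a + alpha * mu_b
    std_t = (1 - alpha) * std_a + alpha * std_b
    shape = [1, -1] + [1] * (pre_interp.dim() - 2)          # broadcast over (N, C, ...)
    normalized = (pre_interp - mu_i.view(shape)) / (std_i.view(shape) + eps)
    return normalized * std_t.view(shape) + mu_t.view(shape)
```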

Studying the impact of magnitude pruning on contrastive learning methods

Jul 01, 2022
Francesco Corti, Rahim Entezari, Sara Hooker, Davide Bacciu, Olga Saukh

We study the impact of different pruning techniques on the representations learned by deep neural networks trained with contrastive loss functions. Our work finds that at high sparsity levels, contrastive learning results in a higher number of misclassified examples than models trained with traditional cross-entropy loss. To understand this pronounced difference, we use metrics such as the number of PIEs (Hooker et al., 2019), Q-Score (Kalibhat et al., 2022), and PD-Score (Baldock et al., 2021) to measure the impact of pruning on the quality of the learned representation. Our analysis suggests that the schedule with which pruning is applied during training matters. We find that the negative impact of sparsity on the quality of the learned representation is highest when pruning is introduced early in the training phase.
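
For reference, a common way to sparsify such models is one-shot global magnitude pruning, sketched below with PyTorch's pruning utilities. The sparsity level is illustrative, and the pruning schedule, which the paper identifies as important, is not modeled here.

```python
# One-shot global magnitude pruning sketch using torch.nn.utils.prune.
import torch
import torch.nn.utils.prune as prune

def global_magnitude_prune(model, sparsity=0.9):
    """Zero out the `sparsity` fraction of smallest-magnitude weights
    across all conv and linear layers."""
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured,
                              amount=sparsity)
    for m, name in params:        # fold the masks back into the weights
        prune.remove(m, name)
    return model
```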

Understanding the effect of sparsity on neural networks robustness

Jun 22, 2022
Lukas Timpl, Rahim Entezari, Hanie Sedghi, Behnam Neyshabur, Olga Saukh

This paper examines the impact of static sparsity on the robustness of a trained network to weight perturbations, data corruption, and adversarial examples. We show that, up to a certain sparsity achieved by increasing network width and depth while keeping the network capacity fixed, sparsified networks consistently match and often outperform their initially dense versions. Robustness and accuracy decline simultaneously for very high sparsity due to loose connectivity between network layers. Our findings show that the rapid robustness drop under network compression observed in the literature is due to reduced network capacity rather than sparsity.
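
One of the robustness probes described above, perturbing weights with Gaussian noise and re-measuring accuracy, can be sketched roughly as follows. The noise scale is a placeholder, and keeping exactly-zero (pruned) weights at zero is an assumption about how sparsity is represented.

```python
# Rough sketch: accuracy of a (possibly sparsified) model under Gaussian
# weight perturbation; original weights are restored afterwards.
import torch

@torch.no_grad()
def accuracy_under_weight_noise(model, loader, sigma=0.01):
    original = {n: p.clone() for n, p in model.named_parameters()}
    for p in model.parameters():
        # Keep pruned (exactly-zero) weights at zero while perturbing the rest.
        p.add_(sigma * torch.randn_like(p) * (p != 0))
    model.eval()
    correct = total = 0
    for x, y in loader:
        correct += (model(x).argmax(dim=-1) == y).sum().item()
        total += y.numel()
    for n, p in model.named_parameters():   # undo the perturbation
        p.copy_(original[n])
    return correct / max(total, 1)
```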

Deep Neural Network Pruning for Nuclei Instance Segmentation in Hematoxylin & Eosin-Stained Histological Images

Jun 15, 2022
Amirreza Mahbod, Rahim Entezari, Isabella Ellinger, Olga Saukh

Recently, pruning deep neural networks (DNNs) has received a lot of attention for improving accuracy and generalization power, reducing network size, and increasing inference speed on specialized hardware. Although pruning has mainly been tested on computer vision tasks, its application in the context of medical image analysis has hardly been explored. This work investigates the impact of well-known pruning techniques, namely layer-wise and network-wide magnitude pruning, on nuclei instance segmentation performance in histological images. Our instance segmentation model consists of two main branches: (1) a semantic segmentation branch, and (2) a deep regression branch. We investigate the impact of weight pruning on the performance of both branches separately and on the final nuclei instance segmentation result. Evaluated on two publicly available datasets, our results show that layer-wise pruning delivers slightly better performance than network-wide pruning for small compression ratios (CRs), while for large CRs, network-wide pruning yields superior performance. For semantic segmentation, deep regression, and final instance segmentation, 93.75%, 95%, and 80% of the model weights, respectively, can be pruned by layer-wise pruning with less than a 2% reduction in the performance of the respective models.
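
The two pruning variants compared above can be sketched with PyTorch's pruning utilities as shown below; the compression ratio is illustrative, and the segmentation model itself is not shown.

```python
# Sketch of layer-wise vs. network-wide magnitude pruning.
import torch
import torch.nn.utils.prune as prune

PRUNABLE = (torch.nn.Conv2d, torch.nn.Linear)

def layerwise_prune(model, amount=0.9):
    """Remove the same fraction of smallest-magnitude weights in every layer."""
    for m in model.modules():
        if isinstance(m, PRUNABLE):
            prune.l1_unstructured(m, name="weight", amount=amount)
    return model

def networkwide_prune(model, amount=0.9):
    """Remove the globally smallest weights, so per-layer sparsity can differ."""
    params = [(m, "weight") for m in model.modules() if isinstance(m, PRUNABLE)]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured,
                              amount=amount)
    return model
```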

The Role of Permutation Invariance in Linear Mode Connectivity of Neural Networks

Oct 12, 2021
Rahim Entezari, Hanie Sedghi, Olga Saukh, Behnam Neyshabur

In this paper, we conjecture that if the permutation invariance of neural networks is taken into account, SGD solutions will likely have no barrier in the linear interpolation between them. Although it is a bold conjecture, we show how extensive empirical attempts fall short of refuting it. We further provide a preliminary theoretical result to support our conjecture. Our conjecture has implications for the lottery ticket hypothesis, distributed training, and ensemble methods.
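
A barrier measurement along the linear path between two solutions can be sketched as follows. The permutation search needed to align the networks before interpolation is the difficult part and is not shown, and the barrier definition used here (maximum deviation above the linear interpolation of the endpoint losses) is one common choice.

```python
# Sketch: loss along the linear interpolation between two networks' weights.
import copy
import torch

@torch.no_grad()
def linear_path_barrier(model_a, model_b, loss_fn, loader, num_points=11):
    sd_a, sd_b = model_a.state_dict(), model_b.state_dict()
    losses = []
    for i in range(num_points):
        alpha = i / (num_points - 1)
        # Interpolate every tensor, preserving its original dtype.
        interp = {k: ((1 - alpha) * sd_a[k].float()
                      + alpha * sd_b[k].float()).to(sd_a[k].dtype)
                  for k in sd_a}
        model = copy.deepcopy(model_a)
        model.load_state_dict(interp)
        model.eval()
        total, n = 0.0, 0
        for x, y in loader:
            total += loss_fn(model(x), y).item() * y.size(0)
            n += y.size(0)
        losses.append(total / max(n, 1))
    # Barrier: largest deviation above the straight line between endpoint losses.
    barrier = max(
        losses[i] - ((1 - i / (num_points - 1)) * losses[0]
                     + i / (num_points - 1) * losses[-1])
        for i in range(num_points)
    )
    return losses, barrier
```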

Class-dependent Compression of Deep Neural Networks

Sep 23, 2019
Rahim Entezari, Olga Saukh

Today's deep neural networks require substantial computation resources for their training, storage, and inference, which limits their effective use on resource-constrained devices. On the one hand, many recent research activities explore different options for compressing and optimizing deep models. On the other hand, in many real-world applications we face the class imbalance problem, e.g., a higher number of false positives produced by a compressed network may be tolerable, yet the number of false negatives must stay low. The problem originates either from the intrinsic imbalance of samples within the training data set, or from the fact that some classes are more important for the application domain of the model, e.g., in medical imaging. In this paper, we propose a class-dependent network compression method based on a newly introduced network pruning technique used to search for lottery tickets in the original deep network. We introduce a novel combined loss function to find efficient compressed sub-networks with the same or even lower number of false negatives compared to the original network. Our experimental evaluation on three benchmark data sets shows that the resulting compressed sub-networks achieve up to 50% fewer false negatives and a higher overall AUC-ROC measure, yet use up to 99% fewer parameters compared to the original network.
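
A loss of this flavor can be sketched as a class-weighted binary cross-entropy that up-weights errors on the critical (positive) class; the weighting scheme and the fn_weight value below are illustrative assumptions, not the paper's exact combined loss.

```python
# Hedged sketch of a false-negative-weighted binary cross-entropy.
import torch
import torch.nn.functional as F

def fn_weighted_loss(logits, targets, fn_weight=5.0):
    """BCE in which errors on the positive (critical) class, i.e. potential
    false negatives, are up-weighted by `fn_weight`."""
    bce = F.binary_cross_entropy_with_logits(logits, targets.float(),
                                             reduction="none")
    weights = torch.where(targets > 0, fn_weight * torch.ones_like(bce),
                          torch.ones_like(bce))
    return (weights * bce).mean()
```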

AVID: Adversarial Visual Irregularity Detection

Jul 17, 2018
Mohammad Sabokrou, Masoud Pourreza, Mohsen Fayyaz, Rahim Entezari, Mahmood Fathy, Jürgen Gall, Ehsan Adeli

Real-time detection of irregularities in visual data is invaluable in many prospective applications, including surveillance, patient monitoring systems, etc. With the surge of deep learning methods in recent years, researchers have tried a wide spectrum of methods for different applications. However, for the case of irregularity or anomaly detection in videos, training an end-to-end model is still an open challenge, since irregularity is often not well-defined and there are not enough irregular samples to use during training. In this paper, inspired by the success of generative adversarial networks (GANs) for training deep models in unsupervised or self-supervised settings, we propose an end-to-end deep network for detection and fine localization of irregularities in videos (and images). Our proposed architecture is composed of two networks, which are trained to compete with each other while collaborating to find the irregularity. One network works as a pixel-level irregularity Inpainter (I), and the other works as a patch-level Detector (D). After an adversarial self-supervised training, in which I tries to fool D into accepting its inpainted output as regular (normal), the two networks collaborate to detect and fine-segment the irregularity in any given testing video. Our results on three different datasets show that our method can outperform the state-of-the-art and fine-segment the irregularity.
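
A single adversarial training step between the two networks might look roughly like the sketch below; the architectures of I and D, the loss weighting, and the use of a plain reconstruction term are placeholders rather than the paper's exact objective.

```python
# Rough sketch of one Inpainter-vs-Detector adversarial training step.
import torch
import torch.nn.functional as F

def adversarial_step(inpainter, detector, frames, opt_i, opt_d, recon_weight=0.4):
    # --- Detector update: real frames -> 1 (regular), inpainted -> 0 (irregular).
    fake = inpainter(frames)
    opt_d.zero_grad()
    d_real, d_fake = detector(frames), detector(fake.detach())
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_loss.backward()
    opt_d.step()

    # --- Inpainter update: reconstruct the input while fooling the Detector.
    opt_i.zero_grad()
    fake = inpainter(frames)
    d_out = detector(fake)
    i_loss = (recon_weight * F.mse_loss(fake, frames)
              + F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out)))
    i_loss.backward()
    opt_i.step()
    return d_loss.item(), i_loss.item()
```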
