
Mohammad Havaei


Source-free Domain Adaptation Requires Penalized Diversity

Apr 12, 2023
Laya Rafiee Sevyeri, Ivaxi Sheth, Farhood Farahnak, Alexandre See, Samira Ebrahimi Kahou, Thomas Fevens, Mohammad Havaei


While neural networks are capable of achieving human-like performance in many tasks such as image classification, the impressive performance of each model is limited to its own dataset. Source-free domain adaptation (SFDA) was introduced to address knowledge transfer between different domains in the absence of source data, thus increasing data privacy. Diversity in representation space can be vital to a model's adaptability in varied and difficult domains. In unsupervised SFDA, diversity is limited to learning a single hypothesis on the source or learning multiple hypotheses with a shared feature extractor. Motivated by the improved predictive performance of ensembles, we propose a novel unsupervised SFDA algorithm that promotes representational diversity through the use of separate feature extractors with Distinct Backbone Architectures (DBA). Although diversity in feature space is increased, unconstrained mutual information (MI) maximization may amplify weak hypotheses. We therefore introduce the Weak Hypothesis Penalization (WHP) regularizer as a mitigation strategy. Our work proposes Penalized Diversity (PD), where the synergy of DBA and WHP is applied to unsupervised source-free domain adaptation under covariate shift. In addition, PD is augmented with a weighted MI maximization objective for label distribution shift. Empirical results on natural, synthetic, and medical domains demonstrate the effectiveness of PD under different distributional shifts.
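For context, the MI maximization that PD constrains is, in SFDA methods of this family, typically the information-maximization objective: the entropy of the batch-averaged prediction minus the average per-sample entropy. A minimal numpy sketch (the function name and toy inputs are ours, not from the paper):

```python
import numpy as np

def mutual_information(probs, eps=1e-12):
    """InfoMax objective for a batch of softmax outputs.

    probs: (N, C) array of per-sample class probabilities.
    Returns H(mean prediction) - mean per-sample entropy; maximizing
    this encourages confident yet globally diverse predictions.
    """
    p_mean = probs.mean(axis=0)  # marginal prediction over the batch
    h_marginal = -np.sum(p_mean * np.log(p_mean + eps))
    h_cond = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    return h_marginal - h_cond

# Confident, evenly spread predictions score high...
confident = np.array([[0.99, 0.01], [0.01, 0.99]])
# ...while uniform (uncertain) predictions score near zero.
uniform = np.full((2, 2), 0.5)
print(mutual_information(confident), mutual_information(uniform))
```

Maximizing this objective without constraints treats all hypotheses alike, which is what allows a weak hypothesis to be amplified.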


Pitfalls of Conditional Batch Normalization for Contextual Multi-Modal Learning

Nov 28, 2022
Ivaxi Sheth, Aamer Abdul Rahman, Mohammad Havaei, Samira Ebrahimi Kahou


Humans have perfected the art of learning from multiple modalities through their sensory organs. Despite their impressive predictive performance on a single modality, neural networks cannot reach human-level accuracy with respect to multiple modalities. This is a particularly challenging task due to variations in the structure of the respective modalities. Conditional Batch Normalization (CBN) is a popular method proposed to learn contextual features that aid deep learning tasks. The technique uses auxiliary data to improve representational power by learning affine transformations for convolutional neural networks. Despite the boost in performance observed with CBN layers, our work reveals that the visual features learned by introducing auxiliary data via CBN deteriorate. We perform comprehensive experiments evaluating the brittleness of CBN networks across various datasets, suggesting that learning from visual features alone could often be superior for generalization. We evaluate CBN models on natural images for bird classification and on histology images for cancer type classification. We observe that the CBN network learns close to no visual features on the bird classification dataset and only partial visual features on the histology dataset. Our extensive experiments reveal that CBN may encourage shortcut learning between the auxiliary data and the labels.
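The affine modulation that CBN performs can be sketched in a few lines: the auxiliary embedding predicts per-channel offsets to the batch-norm scale and shift. A minimal numpy sketch (the weight matrices stand in for a learned MLP; all names are illustrative):

```python
import numpy as np

def conditional_batch_norm(x, aux, W_gamma, W_beta, eps=1e-5):
    """Conditional Batch Normalization on (N, C) features.

    The auxiliary vector `aux` predicts per-channel offsets to the
    affine parameters, so the normalization is modulated by context.
    """
    # Standard batch normalization of the visual features.
    x_hat = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
    # Predict offsets to gamma and beta from the auxiliary data.
    gamma = 1.0 + aux @ W_gamma  # (C,)
    beta = aux @ W_beta          # (C,)
    return gamma * x_hat + beta

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))        # batch of visual features
aux = rng.normal(size=(3,))        # auxiliary (contextual) embedding
W_gamma = 0.1 * rng.normal(size=(3, 4))
W_beta = 0.1 * rng.normal(size=(3, 4))
out = conditional_batch_norm(x, aux, W_gamma, W_beta)
print(out.shape)  # (8, 4)
```

Because the labels can often be predicted from `aux` alone through `gamma` and `beta`, the network has an easy shortcut that bypasses the visual pathway, which is the failure mode the abstract describes.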

* Accepted at ICBINB workshop @ NeurIPS 2022 

FL Games: A Federated Learning Framework for Distribution Shifts

Oct 31, 2022
Sharut Gupta, Kartik Ahuja, Mohammad Havaei, Niladri Chatterjee, Yoshua Bengio


Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server. However, participating clients typically each hold data from a different distribution, which can lead to catastrophic failures in generalization on data from another client, which effectively represents a new domain. In this work, we argue that in order to generalize better across non-i.i.d. clients, it is imperative to learn only correlations that are stable and invariant across domains. We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients. While training to achieve the Nash equilibrium, the traditional best response strategy suffers from high-frequency oscillations. We demonstrate that FL GAMES effectively resolves this challenge and exhibits smooth performance curves. Further, FL GAMES scales well in the number of clients, requires significantly fewer communication rounds, and is agnostic to device heterogeneity. Through empirical evaluation, we demonstrate that FL GAMES achieves high out-of-distribution performance on various benchmarks.
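For readers unfamiliar with the federated setup described above, a generic FedAvg round (the baseline that approaches like FL GAMES build on, not the FL GAMES algorithm itself) can be sketched as:

```python
import numpy as np

def local_sgd(w, X, y, lr=0.1, steps=10):
    """A few local least-squares SGD steps on one client's data."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def fedavg_round(w_global, clients):
    """One FedAvg round: each client trains locally, then the server
    averages the results, weighted by client sample counts."""
    n_total = sum(len(y) for _, y in clients)
    return sum(local_sgd(w_global.copy(), X, y) * (len(y) / n_total)
               for X, y in clients)

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0])
# Toy i.i.d. clients with noiseless linear targets.
clients = [(X, X @ w_true)
           for X in (rng.normal(size=(20, 2)) for _ in range(3))]
w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
print(np.round(w, 2))  # recovers w_true on this noiseless toy problem
```

On i.i.d. clients like these, plain averaging works; the non-i.i.d. case the abstract targets is where the averaged model picks up client-specific, non-invariant correlations.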

* Accepted as ORAL at NeurIPS Workshop on Federated Learning: Recent Advances and New Challenges. arXiv admin note: text overlap with arXiv:2205.11101 

FHIST: A Benchmark for Few-shot Classification of Histological Images

May 31, 2022
Fereshteh Shakeri, Malik Boudiaf, Sina Mohammadi, Ivaxi Sheth, Mohammad Havaei, Ismail Ben Ayed, Samira Ebrahimi Kahou


Few-shot learning has recently attracted wide interest in image classification, but almost all current public benchmarks focus on natural images. The few-shot paradigm is highly relevant in medical-imaging applications due to the scarcity of labeled data, as annotations are expensive and require specialized expertise. However, in medical imaging, few-shot learning research is sparse, limited to private datasets, and still at an early stage. In particular, the few-shot setting is of high interest in histology due to the diversity and fine granularity of cancer-related tissue classification tasks, and the variety of data-preparation techniques. This paper introduces a highly diversified public benchmark, gathered from various public datasets, for few-shot histology data classification. We build few-shot tasks and base-training data with various tissue types, different levels of domain shift stemming from various cancer sites, and different class-granularity levels, thereby reflecting realistic scenarios. We evaluate the performance of state-of-the-art few-shot learning methods on our benchmark, and observe that simple fine-tuning and regularization methods achieve better results than the popular meta-learning and episodic-training paradigm. Furthermore, we introduce three scenarios based on the domain shifts between the source and target histology data: near-domain, middle-domain and out-domain. Our experiments display the potential of few-shot learning in histology classification, with state-of-the-art few-shot learning methods approaching the supervised-learning baselines in the near-domain setting. In our out-domain setting, for 5-way 5-shot, the best performing method reaches 60% accuracy. We believe that our work could help in building realistic evaluations and fair comparisons of few-shot learning methods and will further encourage research in the few-shot paradigm.

* Code available at: https://github.com/mboudiaf/Few-shot-histology 

FL Games: A federated learning framework for distribution shifts

May 23, 2022
Sharut Gupta, Kartik Ahuja, Mohammad Havaei, Niladri Chatterjee, Yoshua Bengio


Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server. However, participating clients typically each hold data from a different distribution, whereby predictive models with strong in-distribution generalization can fail catastrophically on unseen domains. In this work, we argue that in order to generalize better across non-i.i.d. clients, it is imperative to learn only correlations that are stable and invariant across domains. We propose FL Games, a game-theoretic framework for federated learning that learns causal features that are invariant across clients. While training to achieve the Nash equilibrium, the traditional best response strategy suffers from high-frequency oscillations. We demonstrate that FL Games effectively resolves this challenge and exhibits smooth performance curves. Further, FL Games scales well in the number of clients, requires significantly fewer communication rounds, and is agnostic to device heterogeneity. Through empirical evaluation, we demonstrate that FL Games achieves high out-of-distribution performance on various benchmarks.


Minimizing Client Drift in Federated Learning via Adaptive Bias Estimation

Apr 27, 2022
Farshid Varno, Marzie Saghayi, Laya Rafiee, Sharut Gupta, Stan Matwin, Mohammad Havaei


In Federated Learning, a number of clients collaborate to train a model without sharing their data. Client models are optimized locally and are communicated through a central hub called the server. A major challenge is dealing with heterogeneity among clients' data, which causes the local optimization to drift away from the global objective. In order to estimate and therefore remove this drift, variance reduction techniques have recently been incorporated into Federated Learning optimization. However, the existing solutions propagate the error of their estimates throughout the optimization trajectory, which leads to inaccurate approximations of the clients' drift and, ultimately, failure to remove it properly. In this paper, we address this issue by introducing an adaptive algorithm that efficiently reduces clients' drift. Compared to previous works on adapting variance reduction to Federated Learning, our approach uses less or the same level of communication bandwidth, computation, and memory. Additionally, it addresses the instability problem prevalent in prior work, caused by the increasing norm of the estimates, which makes our approach a much more practical solution for large-scale Federated Learning settings. Our experimental results demonstrate that the proposed algorithm converges significantly faster and achieves higher accuracy than the baselines across an extensive set of Federated Learning benchmarks.
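The drift problem can be illustrated with a SCAFFOLD-style control variate on a toy problem (a generic sketch of variance reduction in federated optimization, not this paper's algorithm): with heterogeneous client objectives, plain local training plus averaging settles away from the global optimum, while subtracting an estimated per-client drift recovers it.

```python
import numpy as np

# Two clients with heterogeneous quadratic objectives
# f_i(w) = a_i * (w - b_i)^2 / 2; the global optimum is the
# curvature-weighted mean of the b_i, not the mean of local optima.
a = np.array([1.0, 4.0])
b = np.array([0.0, 1.0])
w_opt = (a * b).sum() / a.sum()  # 0.8

def grad(i, w):
    return a[i] * (w - b[i])

def run(corrected, rounds=200, local_steps=20, lr=0.05):
    w, c = 0.0, np.zeros(2)  # server model, per-client drift estimates
    for _ in range(rounds):
        local = []
        for i in range(2):
            wi = w
            for _ in range(local_steps):
                g = grad(i, wi)
                if corrected:
                    g -= c[i]  # subtract the estimated drift
                wi -= lr * g
            local.append(wi)
        if corrected:
            # Drift estimate: own gradient minus the average gradient,
            # both evaluated at the current server model.
            gs = np.array([grad(i, w) for i in range(2)])
            c = gs - gs.mean()
        w = float(np.mean(local))
    return w

print(run(corrected=False), run(corrected=True), w_opt)
```

The uncorrected run converges to a biased point between the local optima; the corrected run reaches `w_opt`. The paper's contribution concerns keeping such drift estimates accurate and bounded over long trajectories.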

* AdaBest 

CT-SGAN: Computed Tomography Synthesis GAN

Nov 04, 2021
Ahmad Pesaranghader, Yiping Wang, Mohammad Havaei


Diversity in data is critical for the successful training of deep learning models. Leveraging a recurrent generative adversarial network, we propose CT-SGAN, a model that generates large-scale 3D synthetic CT-scan volumes ($\geq 224\times224\times224$) when trained on a small dataset of chest CT-scans. CT-SGAN offers an attractive solution to two major challenges facing machine learning in medical imaging: the small amount of available i.i.d. training data, and the restrictions on sharing patient data, which prevent the rapid acquisition of larger and more diverse datasets. We evaluate the fidelity of the generated images qualitatively and quantitatively using various metrics, including Fr\'echet Inception Distance and Inception Score. We further show that CT-SGAN can significantly improve lung nodule detection accuracy by pre-training a classifier on a vast amount of synthetic data.
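The Fréchet Inception Distance mentioned above compares Gaussian fits to real and generated feature distributions. A sketch assuming diagonal covariances for brevity (the full metric uses a matrix square root of the covariance product, and in practice the features come from an Inception network):

```python
import numpy as np

def frechet_distance_diag(mu1, var1, mu2, var2):
    """Frechet distance between two Gaussians with diagonal covariances:
    ||mu1 - mu2||^2 + sum((sqrt(v1) - sqrt(v2))^2).
    FID applies this to feature embeddings of real vs. generated images."""
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.sum((np.sqrt(var1) - np.sqrt(var2)) ** 2))

def stats(x):
    return x.mean(axis=0), x.var(axis=0)

real = np.random.default_rng(0).normal(size=(1000, 4))
fake_good = np.random.default_rng(1).normal(size=(1000, 4))        # matches
fake_bad = 2.0 + 3.0 * np.random.default_rng(2).normal(size=(1000, 4))

d_good = frechet_distance_diag(*stats(real), *stats(fake_good))
d_bad = frechet_distance_diag(*stats(real), *stats(fake_bad))
print(d_good < d_bad)  # a matching distribution scores lower
```

Lower is better: a generator whose feature statistics match the real data's yields a small distance.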

* In Proceedings of MICCAI Deep Generative Models workshop, October 2021 

Hypothesis Disparity Regularized Mutual Information Maximization

Dec 15, 2020
Qicheng Lao, Xiang Jiang, Mohammad Havaei


We propose a hypothesis disparity regularized mutual information maximization~(HDMI) approach to tackle unsupervised hypothesis transfer -- as an effort towards unifying hypothesis transfer learning (HTL) and unsupervised domain adaptation (UDA) -- where the knowledge from a source domain is transferred solely through hypotheses and adapted to the target domain in an unsupervised manner. In contrast to the prevalent HTL and UDA approaches that typically use a single hypothesis, HDMI employs multiple hypotheses to leverage the underlying distributions of the source and target hypotheses. To better utilize the crucial relationship among different hypotheses -- as opposed to unconstrained optimization of each hypothesis independently -- while adapting to the unlabeled target domain through mutual information maximization, HDMI incorporates a hypothesis disparity regularization that coordinates the target hypotheses to jointly learn better target representations while preserving more transferable source knowledge with better-calibrated prediction uncertainty. HDMI achieves state-of-the-art adaptation performance on benchmark datasets for UDA in the context of HTL, without the need to access the source data during adaptation.
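A natural reading of a hypothesis disparity term is an average pairwise divergence between the heads' predictions on the same batch; a sketch using KL divergence (our simplification for illustration, not necessarily the paper's exact formulation):

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Per-sample KL divergence between rows of p and q."""
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def hypothesis_disparity(preds):
    """Average pairwise KL between the predictions of multiple
    hypotheses; lower means the heads agree on the target data.

    preds: (H, N, C) array -- H hypotheses, N samples, C classes.
    """
    H = preds.shape[0]
    total, pairs = 0.0, 0
    for i in range(H):
        for j in range(H):
            if i != j:
                total += kl(preds[i], preds[j]).mean()
                pairs += 1
    return total / pairs

agree = np.stack([np.array([[0.9, 0.1]]), np.array([[0.9, 0.1]])])
disagree = np.stack([np.array([[0.9, 0.1]]), np.array([[0.1, 0.9]])])
print(hypothesis_disparity(agree), hypothesis_disparity(disagree))
```

Minimizing such a term alongside MI maximization is what coordinates the hypotheses instead of optimizing each one independently.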

* Accepted to AAAI 2021 

Conditional Generation of Medical Images via Disentangled Adversarial Inference

Dec 08, 2020
Mohammad Havaei, Ximeng Mao, Yiping Wang, Qicheng Lao


Synthetic medical image generation has huge potential for improving healthcare through many applications, from data augmentation for training machine learning systems to preserving patient privacy. Conditional Generative Adversarial Networks (cGANs) use a conditioning factor to generate images and have shown great success in recent years. Intuitively, the information in an image can be divided into two parts: 1) content, which is presented through the conditioning vector, and 2) style, which is the undiscovered information missing from the conditioning vector. Current practices in using cGANs for medical image generation use only a single variable for image generation (i.e., content) and therefore do not provide much flexibility or control over the generated image. In this work, we propose a methodology to learn disentangled representations of style and content from the image itself, and use this information to impose control over the generation process. In this framework, style is learned in a fully unsupervised manner, while content is learned through both supervised learning (using the conditioning vector) and unsupervised learning (with the inference mechanism). We apply two novel regularization steps to ensure content-style disentanglement. First, we minimize the shared information between content and style by introducing a novel application of the gradient reversal layer (GRL); second, we introduce a self-supervised regularization method to further separate the information in the content and style variables. We show that, in general, models with two latent variables achieve better performance and give more control over the generated image. We also show that our proposed model (DRAI) achieves the best disentanglement score and has the best overall performance.
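The gradient reversal layer (GRL) used to minimize shared information acts as the identity in the forward pass and flips the sign of the gradient in the backward pass, so the upstream encoder is trained to *increase* the downstream predictor's loss. A framework-free sketch with a manual backward (names are illustrative):

```python
import numpy as np

class GradientReverseLayer:
    """Identity on the forward pass; multiplies the incoming gradient
    by -lambda on the backward pass, yielding an adversarial signal
    for whatever feeds into this layer."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x  # identity: activations pass through unchanged

    def backward(self, grad_output):
        return -self.lam * grad_output  # reversed, scaled gradient

grl = GradientReverseLayer(lam=0.5)
x = np.array([1.0, -2.0])
print(grl.forward(x))                      # unchanged activations
print(grl.backward(np.array([0.3, 0.3])))  # prints [-0.15 -0.15]
```

In autograd frameworks this is implemented as a custom function with these forward/backward rules; placing it between the style and content branches discourages one branch from encoding information predictive of the other.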
