Ismail Ben Ayed

ETS

A transductive few-shot learning approach for classification of digital histopathological slides from liver cancer

Nov 29, 2023
Aymen Sadraoui, Ségolène Martin, Eliott Barbot, Astrid Laurent-Bellue, Jean-Christophe Pesquet, Catherine Guettier, Ismail Ben Ayed

This paper presents a new approach for classifying 2D histopathology patches using few-shot learning. The method is designed to tackle a significant challenge in histopathology: the limited availability of labeled data. By applying a sliding-window technique to histopathology slides, we illustrate the practical benefits of transductive learning (i.e., making joint predictions on patches) in achieving consistent and accurate classification. Our approach involves an optimization-based strategy that actively penalizes the prediction of a large number of distinct classes within each window. We conducted experiments on histopathological data to classify tissue classes in digital slides of liver cancer, specifically hepatocellular carcinoma. The initial results show the effectiveness of our method and its potential to enhance automated cancer diagnosis and treatment, while reducing the time and effort required for expert annotation.
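
To make the transductive idea concrete, below is a minimal, hypothetical PyTorch sketch: softmax predictions for all patches in one window are refined jointly under a group-sparsity penalty on average class usage, which discourages predicting many distinct classes per window. The fidelity term, the penalty form, and all hyperparameters are assumptions for illustration, not the paper's exact objective.

```python
import torch

def refine_window(patch_logits, lam=1.0, steps=100, lr=0.1):
    """Jointly refine the predictions of all patches in one sliding window.

    The sqrt group-sparsity term on average class usage discourages
    predicting many distinct classes per window (an assumption; the
    paper's exact penalty may differ).
    """
    z = patch_logits.detach().clone().requires_grad_(True)
    opt = torch.optim.SGD([z], lr=lr)
    base = patch_logits.detach().softmax(dim=1)   # initial patch probabilities
    for _ in range(steps):
        opt.zero_grad()
        q = z.softmax(dim=1)                      # (n_patches, n_classes)
        fidelity = -(base * q.clamp_min(1e-8).log()).sum(dim=1).mean()
        sparsity = q.mean(dim=0).clamp_min(1e-8).sqrt().sum()
        (fidelity + lam * sparsity).backward()
        opt.step()
    return z.softmax(dim=1).detach()
```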

Transductive Learning for Textual Few-Shot Classification in API-based Embedding Models

Oct 21, 2023
Pierre Colombo, Victor Pellegrain, Malik Boudiaf, Victor Storchan, Myriam Tami, Ismail Ben Ayed, Celine Hudelot, Pablo Piantanida

Proprietary and closed APIs for processing natural language are becoming increasingly common and are impacting practical applications of natural language processing, including few-shot classification. Few-shot classification involves training a model to perform a new classification task with a handful of labeled data. This paper presents three contributions. First, we introduce a scenario where the embedding of a pre-trained model is served through a gated API with compute-cost and data-privacy constraints. Second, we propose transductive inference, a learning paradigm that has been overlooked by the NLP community. Unlike traditional inductive learning, transductive inference leverages the statistics of unlabeled data. We also introduce a new parameter-free transductive regularizer based on the Fisher-Rao loss, which can be used on top of the gated API embeddings. This method fully utilizes unlabeled data, shares no labels with the third-party API provider, and could serve as a baseline for future research. Third, we propose an improved experimental setting and compile a benchmark of eight datasets involving multiclass classification in four different languages, with up to 151 classes. We evaluate our methods using eight backbone models and an episodic evaluation over 1,000 episodes, demonstrating the superiority of transductive inference over the standard inductive setting.
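
For reference, the Fisher-Rao geodesic distance between two categorical distributions has the closed form d(p, q) = 2 arccos(Σ_k √(p_k q_k)); a small PyTorch helper is sketched below. How exactly the paper turns this primitive into its parameter-free transductive regularizer is not specified here, so treat the function only as the underlying distance.

```python
import torch

def fisher_rao(p, q, eps=1e-8):
    # Fisher-Rao geodesic distance on the probability simplex:
    # d(p, q) = 2 * arccos( sum_k sqrt(p_k * q_k) )
    bc = (p.clamp_min(eps).sqrt() * q.clamp_min(eps).sqrt()).sum(dim=-1)
    return 2.0 * torch.acos(bc.clamp(max=1.0))  # clamp guards rounding above 1
```

One plausible transductive use, stated only as an assumption, is penalizing the Fisher-Rao distance between each unlabeled query's prediction and the batch-level class marginal.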

* EMNLP 2023 

ClusT3: Information Invariant Test-Time Training

Oct 18, 2023
Gustavo A. Vargas Hakim, David Osowiechi, Mehrdad Noori, Milad Cheraghalikhani, Ismail Ben Ayed, Christian Desrosiers

Deep learning models have shown remarkable performance across a broad range of vision tasks. However, they are often vulnerable to domain shifts at test time. Test-time training (TTT) methods have been developed to mitigate these vulnerabilities: a secondary task is solved at training time alongside the main task, and later serves as a self-supervised proxy task at test time. In this work, we propose a novel unsupervised TTT technique based on maximizing the mutual information between multi-scale feature maps and a discrete latent representation, which can be integrated into standard training as an auxiliary clustering task. Experimental results demonstrate competitive classification performance on several popular test-time adaptation benchmarks.
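
The mutual information between inputs and discrete cluster assignments has the standard decomposition I = H(E[q]) − E[H(q)], computable in closed form from soft assignments. Below is a hedged sketch of an auxiliary clustering head in this spirit; the 1x1-convolution projector and all names are assumptions, not ClusT3's actual architecture.

```python
import torch
import torch.nn as nn

class ClusterHead(nn.Module):
    """Projects a feature map to K soft cluster assignments (1x1 conv assumed)."""
    def __init__(self, channels, k):
        super().__init__()
        self.proj = nn.Conv2d(channels, k, kernel_size=1)

    def forward(self, feats):                             # feats: (B, C, H, W)
        return self.proj(feats).flatten(2).softmax(dim=1)  # (B, K, HW)

def neg_mutual_information(q, eps=1e-8):
    # I(input; cluster) = H(E[q]) - E[H(q)]; maximizing I means
    # minimizing its negative, used as the auxiliary clustering loss.
    flat = q.permute(0, 2, 1).reshape(-1, q.size(1))       # (B*HW, K)
    marginal = flat.mean(dim=0)
    h_marginal = -(marginal * marginal.clamp_min(eps).log()).sum()
    h_conditional = -(flat * flat.clamp_min(eps).log()).sum(dim=1).mean()
    return -(h_marginal - h_conditional)
```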

Aggregated f-average Neural Network for Interpretable Ensembling

Oct 09, 2023
Mathieu Vu, Emilie Chouzenoux, Jean-Christophe Pesquet, Ismail Ben Ayed

Ensemble learning leverages multiple models (i.e., weak learners) on a common machine learning task to enhance prediction performance. Basic ensembling approaches average the weak learners' outputs, while more sophisticated ones stack a machine learning model between the weak learners' outputs and the final prediction. This work fuses both aforementioned frameworks. We introduce an aggregated f-average (AFA) shallow neural network which models and combines different types of averages to perform an optimal aggregation of the weak learners' predictions. We emphasise its interpretable architecture and simple training strategy, and illustrate its good performance on the problem of few-shot class-incremental learning.
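
An f-average (quasi-arithmetic mean) has the form f⁻¹(Σ_m w_m f(p_m)); choosing f as the identity, log, or reciprocal recovers the arithmetic, geometric, and harmonic means. The sketch below shows one such unit with learned convex weights; the paper's AFA network composes several of these, so this is only the core building block, with names assumed.

```python
import torch
import torch.nn as nn

class FAverage(nn.Module):
    """One f-average unit: f^{-1}( sum_m w_m * f(p_m) ) with learned convex weights."""
    def __init__(self, f, f_inv, n_models):
        super().__init__()
        self.f, self.f_inv = f, f_inv
        self.logits = nn.Parameter(torch.zeros(n_models))

    def forward(self, preds):                     # preds: (M, B, C) weak-learner outputs
        w = self.logits.softmax(0).view(-1, 1, 1)  # convex combination weights
        return self.f_inv((w * self.f(preds)).sum(dim=0))

# Special cases (log and reciprocal assume strictly positive predictions,
# e.g. class probabilities):
arithmetic = FAverage(lambda x: x, lambda x: x, n_models=3)
geometric = FAverage(torch.log, torch.exp, n_models=3)
harmonic = FAverage(lambda x: 1.0 / x, lambda x: 1.0 / x, n_models=3)
```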

* 5 pages 

Bag of Tricks for Fully Test-Time Adaptation

Oct 03, 2023
Saypraseuth Mounsaveng, Florent Chiaroni, Malik Boudiaf, Marco Pedersoli, Ismail Ben Ayed

Fully Test-Time Adaptation (TTA), which aims at adapting models to data drifts, has recently attracted wide interest. Numerous tricks and techniques have been proposed to ensure robust learning on arbitrary streams of unlabeled data. However, assessing the true impact of each individual technique and obtaining a fair comparison still constitutes a significant challenge. To help consolidate the community's knowledge, we present a categorization of selected orthogonal TTA techniques, including small-batch normalization, stream rebalancing, reliable sample selection, and network confidence calibration. We meticulously dissect the effect of each approach in different scenarios of interest. Through our analysis, we shed light on the trade-offs these techniques induce between accuracy, required computational power, and model complexity. We also uncover the synergies that arise when combining techniques, and establish new state-of-the-art results.
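
To give a flavor of two of the categorized tricks, the hedged sketch below (not the paper's code) re-estimates BatchNorm statistics on the test stream and adapts only on confident, low-entropy samples; the threshold rho * log(K) and all names are assumptions.

```python
import math
import torch
import torch.nn as nn

@torch.no_grad()
def use_test_batch_stats(model, momentum=0.1):
    # One common trick: let BatchNorm re-estimate its statistics
    # from the unlabeled test stream.
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.train()
            m.momentum = momentum

def reliable_entropy_loss(logits, num_classes, rho=0.4):
    # Another trick: adapt only on confident (low-entropy) samples.
    # The rho * log(K) threshold is an assumption, not the paper's value.
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    mask = entropy < rho * math.log(num_classes)
    return (entropy * mask).sum() / mask.sum().clamp_min(1)
```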

A Foundation LAnguage-Image model of the Retina (FLAIR): Encoding expert knowledge in text supervision

Aug 15, 2023
Julio Silva-Rodriguez, Hadi Chakor, Riadh Kobbi, Jose Dolz, Ismail Ben Ayed

Foundation vision-language models are currently transforming computer vision and are on the rise in medical imaging, fueled by their very promising generalization capabilities. However, initial attempts to transfer this new paradigm to medical imaging have shown less impressive performance than in other domains, due to the significant domain shift and the complex, expert domain knowledge inherent to medical-imaging tasks. Motivated by the need for domain-expert foundation models, we present FLAIR, a pre-trained vision-language model for universal retinal fundus image understanding. To this end, we compiled 37 open-access, mostly categorical fundus imaging datasets from various sources, with up to 97 different target conditions and 284,660 images. We integrate expert domain knowledge in the form of descriptive textual prompts during both pre-training and zero-shot inference, enhancing the less informative categorical supervision of the data. This expert textual knowledge, which we compiled from the relevant clinical literature and community standards, describes the fine-grained features of the pathologies as well as the hierarchies and dependencies between them. We report comprehensive evaluations, which illustrate the benefit of integrating expert knowledge and the strong generalization capabilities of FLAIR under difficult scenarios with domain shifts or unseen categories. When adapted with a lightweight linear probe, FLAIR outperforms fully trained, dataset-focused models, more so in the few-shot regimes. Interestingly, FLAIR outperforms larger-scale, more generalist image-language models by a large margin, which emphasizes the potential of embedding experts' domain knowledge and the limitations of generalist models in medical imaging.
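
A hedged sketch of CLIP-style zero-shot inference with expert prompts follows; the prompt texts, the text_encoder callable, and the temperature are placeholders for illustration, and the released model at https://github.com/jusiro/FLAIR should be consulted for the actual interface.

```python
import torch

# Hypothetical expert prompts; the actual ones are compiled from the
# clinical literature and community standards.
expert_prompts = {
    "no referable pathology": "fundus photograph of a healthy retina",
    "diabetic retinopathy": "fundus photograph with microaneurysms and hard exudates",
    "glaucoma": "fundus photograph with an enlarged optic cup-to-disc ratio",
}

def zero_shot_probs(image_emb, text_encoder, prompts=expert_prompts, tau=100.0):
    # image_emb: (B, D) image embeddings; text_encoder: any callable
    # mapping a string to a (D,) embedding (a placeholder here).
    txt = torch.stack([text_encoder(p) for p in prompts.values()])
    txt = txt / txt.norm(dim=-1, keepdim=True)
    img = image_emb / image_emb.norm(dim=-1, keepdim=True)
    return (tau * img @ txt.t()).softmax(dim=-1)  # (B, n_classes)
```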

* The pre-trained model is available at: https://github.com/jusiro/FLAIR 

Automatic Data Augmentation Learning using Bilevel Optimization for Histopathological Images

Jul 21, 2023
Saypraseuth Mounsaveng, Issam Laradji, David Vázquez, Marco Pedersoli, Ismail Ben Ayed

Training a deep learning model to classify histopathological images is challenging because of the color and shape variability of cells and tissues, and because of the limited amount of available data, which does not allow proper learning of those variations. Variations can come from the image acquisition process, for example due to different cell-staining protocols or tissue deformation. To tackle this challenge, Data Augmentation (DA) can be used during training to generate additional samples by applying transformations to existing ones, helping the model become invariant to those color and shape transformations. The problem with DA is that it is not only dataset-specific but also requires domain knowledge, which is not always available. Without this knowledge, selecting the right transformations can only be done using heuristics or through a computationally demanding search. To address this, we propose an automatic DA learning method. In this method, the DA parameters, i.e., the transformation parameters needed to improve the model training, are considered learnable and are learned automatically, quickly, and efficiently using a bilevel optimization approach with truncated backpropagation. We validated the method on six different datasets. Experimental results show that our model can learn color and affine transformations that are more helpful for training an image classifier than predefined DA transformations, which are also more expensive to obtain, as they must be selected before training by grid search on a validation set. We also show that, similarly to a model trained with RandAugment, our model has only a few method-specific hyperparameters to tune, while performing better. This makes our model a good solution for learning the best DA parameters, especially in the context of histopathological images, where heuristically defining potentially useful transformations is not trivial.
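
The sketch below illustrates one truncated-backpropagation bilevel step on a toy setup: the inner classifier update is kept differentiable with respect to the DA parameters, the validation loss drives those parameters, and the unrolled graph is then cut. The linear classifier, brightness/contrast augmenter, and step sizes are assumptions; the paper learns richer color and affine transformations.

```python
import torch
import torch.nn.functional as F

aug = torch.zeros(2, requires_grad=True)            # [brightness, contrast]
w = torch.randn(10, 1024, requires_grad=True)       # toy linear classifier
opt_aug = torch.optim.Adam([aug], lr=1e-2)

def augment(x):
    return (1.0 + aug[1]) * x + aug[0]              # differentiable DA

def bilevel_step(x_tr, y_tr, x_val, y_val, inner_lr=0.1):
    global w
    # Inner problem: one classifier update on augmented training data,
    # kept differentiable with respect to the DA parameters.
    inner = F.cross_entropy(augment(x_tr).flatten(1) @ w.t(), y_tr)
    (g,) = torch.autograd.grad(inner, w, create_graph=True)
    w_new = w - inner_lr * g                        # one unrolled update
    # Outer problem: the validation loss drives the DA parameters.
    outer = F.cross_entropy(x_val.flatten(1) @ w_new.t(), y_val)
    opt_aug.zero_grad()
    outer.backward()
    opt_aug.step()
    w = w_new.detach().requires_grad_(True)         # truncate the unrolled graph
```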

* arXiv admin note: text overlap with arXiv:2006.14699 

Prompting classes: Exploring the Power of Prompt Class Learning in Weakly Supervised Semantic Segmentation

Jul 08, 2023
Balamurali Murugesan, Rukhshanda Hussain, Rajarshi Bhattacharya, Ismail Ben Ayed, Jose Dolz

Recently, CLIP-based approaches have exhibited remarkable performance on generalization and few-shot learning tasks, fueled by the power of contrastive language-vision pre-training. In particular, prompt tuning has emerged as an effective strategy to adapt pre-trained language-vision models to downstream tasks by employing task-related textual tokens. Motivated by this progress, in this work we question whether other fundamental problems, such as weakly supervised semantic segmentation (WSSS), can benefit from prompt tuning. Our findings reveal two interesting observations that shed light on the impact of prompt tuning on WSSS. First, modifying only the class token of the text prompt has a greater impact on the Class Activation Map (CAM) than arguably more complex strategies that optimize the context. Second, the class token associated with the image ground truth does not necessarily correspond to the category that yields the best CAM. Motivated by these observations, we introduce a novel approach based on a PrOmpt cLass lEarning (POLE) strategy. Through extensive experiments, we demonstrate that our simple yet efficient approach achieves SOTA performance on a well-known WSSS benchmark. These results highlight not only the benefits of language-vision models in WSSS but also the potential of prompt learning for this problem. The code is available at https://github.com/rB080/WSS_POLE.
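
The first observation suggests tuning only the class-token embedding of a CLIP-style prompt such as "a photo of a [CLS]", keeping the context frozen. A hedged sketch of that parameterization follows; shapes and names are assumptions rather than the POLE implementation (see the repository above).

```python
import torch
import torch.nn as nn

class ClassTokenPrompt(nn.Module):
    """Learn only the class-token embedding; the prompt context stays frozen."""
    def __init__(self, context_emb, class_emb_init):
        super().__init__()
        self.register_buffer("context", context_emb)  # (L, D), frozen context tokens
        self.cls = nn.Parameter(class_emb_init)       # (1, D), the learned class token

    def forward(self):
        # Full prompt embedding to feed to a frozen CLIP-style text encoder.
        return torch.cat([self.context, self.cls], dim=0)  # (L+1, D)
```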

* Under review 

Task Adaptive Feature Transformation for One-Shot Learning

Apr 13, 2023
Imtiaz Masud Ziko, Freddy Lecue, Ismail Ben Ayed

We introduce a simple non-linear embedding adaptation layer, which is fine-tuned on top of fixed pre-trained features for one-shot tasks, significantly improving transductive entropy-based inference in low-shot regimes. Our norm-induced transformation can be understood as a re-parametrization of the feature space that disentangles the representations of different classes in a task-specific manner. It focuses on the relevant feature dimensions while damping the effect of irrelevant dimensions that may cause overfitting in a one-shot setting. We also provide an interpretation of our proposed feature transformation in the basic case of few-shot inference with K-means clustering, and give an interesting bound-optimization link between K-means and entropy minimization, which emphasizes why our feature transformation is useful in the context of entropy minimization. We report comprehensive experiments showing consistent improvements over a variety of one-shot benchmarks, outperforming recent state-of-the-art methods.
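
A hedged sketch of the overall recipe follows: a per-dimension re-weighting with normalization on top of frozen features, fine-tuned by minimizing the entropy of query assignments to class prototypes. The exact form of the paper's norm-induced transformation is not reproduced here; this layer and its names are assumptions.

```python
import torch
import torch.nn as nn

class AdaptLayer(nn.Module):
    """Per-dimension re-weighting plus L2 normalization on frozen features."""
    def __init__(self, dim):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(dim))   # learned per-dimension weights

    def forward(self, x):
        z = self.scale.abs() * x                     # emphasize relevant dimensions
        return z / z.norm(dim=-1, keepdim=True).clamp_min(1e-8)

def entropy_loss(query_feats, prototypes, layer, tau=10.0):
    # Transductive objective: minimize the entropy of query assignments
    # to the (transformed) class prototypes.
    q = layer(query_feats)                           # (N, D) query embeddings
    p = layer(prototypes)                            # (K, D) class prototypes
    probs = (tau * q @ p.t()).softmax(dim=-1)
    return -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
```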
