Hichem Sahli

The STOIC2021 COVID-19 AI challenge: applying reusable training methodologies to private data

Jun 25, 2023
Luuk H. Boulogne, Julian Lorenz, Daniel Kienzle, Robin Schön, Katja Ludwig, Rainer Lienhart, Simon Jégou, Guang Li, Cong Chen, Qi Wang, Derik Shi, Mayug Maniparambil, Dominik Müller, Silvan Mertes, Niklas Schröter, Fabio Hellmann, Miriam Elia, Ine Dirks, Matias Nicolas Bossa, Abel Díaz Berenguer, Tanmoy Mukherjee, Jef Vandemeulebroucke, Hichem Sahli, Nikos Deligiannis, Panagiotis Gonidakis, Ngoc Dung Huynh, Imran Razzak, Reda Bouadjenek, Mario Verdicchio, Pasquale Borrelli, Marco Aiello, James A. Meakin, Alexander Lemm, Christoph Russ, Razvan Ionasec, Nikos Paragios, Bram van Ginneken, Marie-Pierre Revel Dubois

Challenges drive the state of the art of automated medical image analysis. The quantity of public training data that they provide can limit the performance of their solutions, and public access to the training methodology of these solutions remains absent. This study implements the Type Three (T3) challenge format, which allows for training solutions on private data and guarantees reusable training methodologies. With T3, challenge organizers train a codebase provided by the participants on sequestered training data. T3 was implemented in the STOIC2021 challenge, with the goal of predicting from a computed tomography (CT) scan whether subjects had a severe COVID-19 infection, defined as intubation or death within one month. STOIC2021 consisted of a Qualification phase, where participants developed challenge solutions using 2000 publicly available CT scans, and a Final phase, where participants submitted their training methodologies, with which solutions were trained on CT scans of 9724 subjects. The organizers successfully trained six of the eight Final phase submissions. The submitted codebases for training and running inference were released publicly. The winning solution obtained an area under the receiver operating characteristic curve of 0.815 for discerning between severe and non-severe COVID-19. The Final phase solutions of all finalists improved upon their Qualification phase solutions.
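
As a concrete illustration of the challenge's headline metric, the following minimal sketch computes the area under the receiver operating characteristic curve with scikit-learn's roc_auc_score; the labels and scores are toy values invented for illustration, not the challenge's actual evaluation code.

    from sklearn.metrics import roc_auc_score

    # Ground-truth severity labels (1 = intubation or death within one month).
    y_true = [0, 0, 1, 0, 1, 1, 0, 1]
    # Hypothetical model-predicted probabilities of severe COVID-19, one per CT scan.
    y_score = [0.10, 0.35, 0.80, 0.25, 0.60, 0.90, 0.40, 0.70]

    # Area under the ROC curve for discerning severe from non-severe cases.
    print(f"AUC: {roc_auc_score(y_true, y_score):.3f}")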

Fusing Event-based Camera and Radar for SLAM Using Spiking Neural Networks with Continual STDP Learning

Oct 09, 2022
Ali Safa, Tim Verbelen, Ilja Ocket, André Bourdoux, Hichem Sahli, Francky Catthoor, Georges Gielen

This work proposes a first-of-its-kind SLAM architecture that fuses an event-based camera and a Frequency Modulated Continuous Wave (FMCW) radar for drone navigation. Each sensor is processed by a bio-inspired Spiking Neural Network (SNN) with continual Spike-Timing-Dependent Plasticity (STDP) learning, as observed in the brain. In contrast to most learning-based SLAM systems, which a) require the acquisition of a representative dataset of the environment in which navigation must be performed and b) require an offline training phase, our method needs no offline training: the SNN continuously learns features from the input data on the fly via STDP. At the same time, the SNN outputs are used as feature descriptors for loop closure detection and map correction. We conduct numerous experiments to benchmark our system against state-of-the-art RGB methods and demonstrate the robustness of our DVS-Radar SLAM approach under strong lighting variations.
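
For intuition on the loop-closure step, here is a minimal, hypothetical sketch of matching the current fused SNN output descriptor against stored keyframe descriptors by cosine similarity; the function name, matching rule, and threshold are assumptions for illustration, not the paper's implementation.

    import numpy as np

    def detect_loop_closure(desc_now, keyframe_descs, thresh=0.9):
        # Compare the current descriptor against every stored keyframe
        # descriptor; a high cosine similarity suggests a revisited place.
        best_idx, best_sim = None, -1.0
        for i, d in enumerate(keyframe_descs):
            sim = float(np.dot(desc_now, d) /
                        (np.linalg.norm(desc_now) * np.linalg.norm(d) + 1e-12))
            if sim > best_sim:
                best_idx, best_sim = i, sim
        # Report a loop closure only above the similarity threshold.
        return (best_idx, best_sim) if best_sim >= thresh else (None, best_sim)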

Learning to SLAM on the Fly in Unknown Environments: A Continual Learning Approach for Drones in Visually Ambiguous Scenes

Aug 27, 2022
Ali Safa, Tim Verbelen, Ilja Ocket, André Bourdoux, Hichem Sahli, Francky Catthoor, Georges Gielen

Learning to safely navigate in unknown environments is an important task for autonomous drones used in surveillance and rescue operations. In recent years, a number of learning-based Simultaneous Localisation and Mapping (SLAM) systems relying on deep neural networks (DNNs) have been proposed for applications where conventional feature descriptors do not perform well. However, such learning-based SLAM systems rely on DNN feature encoders trained offline in typical deep learning settings. This makes them less suited for drones deployed in environments unseen during training, where continual adaptation is paramount. In this paper, we present a new method for learning to SLAM on the fly in unknown environments, by modulating a low-complexity Dictionary Learning and Sparse Coding (DLSC) pipeline with a newly proposed Quadratic Bayesian Surprise (QBS) factor. We experimentally validate our approach with data collected by a drone in a challenging warehouse scenario, where the high number of ambiguous scenes makes visual disambiguation hard.
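
For readers unfamiliar with sparse coding, the sketch below infers a sparse code for one input against a fixed dictionary using ISTA (iterative soft-thresholding); this is a generic textbook sketch under assumed names, not the paper's DLSC pipeline, and the QBS modulation is omitted because its exact form is not given in the abstract.

    import numpy as np

    def ista_sparse_code(D, x, lam=0.1, n_iter=100):
        # Solve min_z 0.5 * ||x - D @ z||^2 + lam * ||z||_1 via ISTA.
        L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
        z = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ z - x)         # gradient of the quadratic term
            z = z - grad / L                 # gradient step
            z = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
        return z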

Representation Learning with Information Theory for COVID-19 Detection

Jul 04, 2022
Abel Díaz Berenguer, Tanmoy Mukherjee, Matias Bossa, Nikos Deligiannis, Hichem Sahli

Successful data representation is a fundamental factor in machine-learning-based medical imaging analysis, and Deep Learning (DL) has taken an essential role in robust representation learning. However, deep models can quickly overfit intricate patterns, which impairs their ability to generalize to unseen data. We can therefore aid deep models by implementing strategies that help them discover useful priors from the data and learn its intrinsic properties. Our model, which we call a dual role network (DRN), uses a dependency maximization approach based on Least-Squares Mutual Information (LSMI). LSMI leverages dependency measures to ensure representation invariance and local smoothness. While prior works have used information-theoretic measures such as mutual information, which are computationally expensive due to a density estimation step, our LSMI formulation alleviates the issue of intractable mutual information estimation and can be used to approximate it. Experiments on CT-based COVID-19 Detection and COVID-19 Severity Detection benchmarks demonstrate the effectiveness of our method.
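
For reference, the quantity that least-squares mutual information methods estimate in the density-ratio literature is the squared-loss mutual information, a Pearson divergence between the joint density and the product of the marginals:

    $\mathrm{SMI}(X,Y) = \frac{1}{2} \iint \Big( \frac{p(x,y)}{p(x)\,p(y)} - 1 \Big)^{2} p(x)\,p(y)\,\mathrm{d}x\,\mathrm{d}y$

LSMI fits a parametric model of the ratio $p(x,y)/\big(p(x)\,p(y)\big)$ by least squares, which yields an analytic-form estimator and sidesteps explicit density estimation; the paper's exact variant may differ from this standard formulation.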

Continuously Learning to Detect People on the Fly: A Bio-inspired Visual System for Drones

Feb 20, 2022
Ali Safa, Ilja Ocket, André Bourdoux, Hichem Sahli, Francky Catthoor, Georges Gielen

This paper demonstrates for the first time that a biologically-plausible spiking neural network (SNN) equipped with Spike-Timing-Dependent Plasticity (STDP) can continuously learn to detect walking people on the fly using retina-inspired, event-based cameras. Our pipeline works as follows. First, a short sequence of event data ($<2$ minutes) of a walking human, captured by a flying drone, is forwarded to a convolutional SNN-STDP system that also receives teacher spiking signals from a readout (forming a semi-supervised system). Then, STDP adaptation is stopped and the learned system is assessed on testing sequences. We conduct several experiments to study the effect of key parameters in our system and to compare it against conventionally-trained CNNs. We show that our system reaches a higher peak $F_1$ score (+19%) than CNNs working with event-based camera frames, while enabling on-line adaptation.
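
For context on the reported gain, the $F_1$ score is the harmonic mean of detection precision $P$ and recall $R$,

    $F_1 = \frac{2PR}{P + R}$

so the +19% peak-$F_1$ improvement reflects a joint gain in precision and recall rather than in either quantity alone.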

Learning to Detect People on the Fly: A Bio-inspired Event-based Visual System for Drones

Feb 16, 2022
Ali Safa, Ilja Ocket, André Bourdoux, Hichem Sahli, Francky Catthoor, Georges Gielen

We demonstrate for the first time that a biologically-plausible spiking neural network (SNN) equipped with Spike-Timing-Dependent Plasticity (STDP) learning can continuously learn to detect walking people on the fly using retina-inspired, event-based camera data. Our pipeline works as follows. First, a short sequence of event data (< 2 minutes), capturing a walking human from a flying drone, is shown to a convolutional SNN-STDP system which also receives teacher spiking signals from a convolutional readout (forming a semi-supervised system). Then, STDP adaptation is stopped and the learned system is assessed on testing sequences. We conduct several experiments to study the effect of key mechanisms in our system and we compare our precision-recall performance to conventionally-trained CNNs working with either RGB or event-based camera frames.

Learning Event-based Spatio-Temporal Feature Descriptors via Local Synaptic Plasticity: A Biologically-realistic Perspective of Computer Vision

Nov 04, 2021
Ali Safa, Hichem Sahli, André Bourdoux, Ilja Ocket, Francky Catthoor, Georges Gielen

We present an optimization-based theory describing spiking cortical ensembles equipped with Spike-Timing-Dependent Plasticity (STDP) learning, as empirically observed in the visual cortex. Using our methods, we build a class of fully-connected, convolutional and action-based feature descriptors for event-based cameras, which we assess on N-MNIST, the challenging CIFAR10-DVS, and the IBM DVS128 Gesture dataset. We report significant accuracy improvements over conventional state-of-the-art event-based feature descriptors (+8% on CIFAR10-DVS) and over state-of-the-art STDP-based systems (+10% on N-MNIST, +7.74% on IBM DVS128 Gesture). In addition to enabling ultra-low-power learning in neuromorphic edge devices, our work helps pave the way towards a biologically-realistic, optimization-based theory of cortical vision.
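
For orientation, the classical pair-based STDP window potentiates a synapse when the presynaptic spike precedes the postsynaptic spike and depresses it otherwise:

    $\Delta w = \begin{cases} A_{+}\, e^{-\Delta t/\tau_{+}}, & \Delta t > 0 \\ -A_{-}\, e^{\Delta t/\tau_{-}}, & \Delta t \le 0 \end{cases} \qquad \Delta t = t_{\mathrm{post}} - t_{\mathrm{pre}}$

with learning rates $A_{\pm}$ and time constants $\tau_{\pm}$; since the paper derives STDP from an optimization objective, its exact rule may differ from this textbook form.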

Explainable-by-design Semi-Supervised Representation Learning for COVID-19 Diagnosis from CT Imaging

Dec 02, 2020
Abel Díaz Berenguer, Hichem Sahli, Boris Joukovsky, Maryna Kvasnytsia, Ine Dirks, Mitchel Alioscha-Perez, Nikos Deligiannis, Panagiotis Gonidakis, Sebastián Amador Sánchez, Redona Brahimetaj, Evgenia Papavasileiou, Jonathan Cheung-Wai Chan, Fei Li, Shangzhen Song, Yixin Yang, Sofie Tilborghs, Siri Willems, Tom Eelbode, Jeroen Bertels, Dirk Vandermeulen, Frederik Maes, Paul Suetens, Lucas Fidon, Tom Vercauteren, David Robben, Arne Brys, Dirk Smeets, Bart Ilsen, Nico Buls, Nina Watté, Johan de Mey, Annemiek Snoeckx, Paul M. Parizel, Julien Guiot, Louis Deprez, Paul Meunier, Stefaan Gryspeerdt, Kristof De Smet, Bart Jansen, Jef Vandemeulebroucke

Our motivating application is a real-world problem: COVID-19 classification from CT imaging, for which we present an explainable Deep Learning approach based on a semi-supervised classification pipeline that employs variational autoencoders to extract efficient feature embeddings. We have optimized the architecture of two different networks for CT images: (i) a novel conditional variational autoencoder (CVAE) with a specific architecture that integrates the class labels inside the encoder layers and uses side information with shared attention layers for the encoder, making the most of the contextual clues for representation learning, and (ii) a downstream convolutional neural network for supervised classification that reuses the encoder structure of the CVAE. Combined with its explainable classification results, the proposed diagnosis system is effective for COVID-19 classification. Based on the promising qualitative and quantitative results, we envisage wide deployment of the developed technique in large-scale clinical studies. Code is available at https://git.etrovub.be/AVSP/ct-based-covid-19-diagnostic-tool.git.
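
As background, a conditional variational autoencoder of this kind is typically trained by maximizing the conditional evidence lower bound

    $\mathcal{L}(\theta, \phi; x, y) = \mathbb{E}_{q_{\phi}(z \mid x, y)}\big[\log p_{\theta}(x \mid z, y)\big] - D_{\mathrm{KL}}\big(q_{\phi}(z \mid x, y) \,\|\, p(z \mid y)\big)$

where the class label $y$ is injected into the encoder layers; the shared attention layers and side information described above extend this standard objective.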
