Paul Honeine

SynWoodScape: Synthetic Surround-view Fisheye Camera Dataset for Autonomous Driving

Mar 09, 2022
Ahmed Rida Sekkat, Yohan Dupuis, Varun Ravi Kumar, Hazem Rashed, Senthil Yogamani, Pascal Vasseur, Paul Honeine

Surround-view cameras are a primary sensor for automated driving, used for near-field perception, and are among the most commonly deployed sensors in commercial vehicles. Four fisheye cameras with a 190° field of view cover the full 360° around the vehicle. Because of their strong radial distortion, standard vision algorithms do not extend to them easily. Previously, we released WoodScape, the first public fisheye surround-view dataset. In this work, we release a synthetic version of that dataset, which addresses many of its weaknesses and extends it. Firstly, pixel-wise ground truth for optical flow and depth cannot be obtained on the real dataset. Secondly, WoodScape did not annotate all four cameras simultaneously, in order to sample diverse frames; as a result, multi-camera algorithms could not be designed, which the new dataset enables. We implemented surround-view fisheye geometric projections in the CARLA simulator matching WoodScape's configuration and created SynWoodScape. We release 80k images from the synthetic dataset with annotations for 10+ tasks, along with the baseline code and supporting scripts.
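As a rough illustration of the fisheye geometry involved (not taken from the paper), surround-view cameras of this kind are often described by a polynomial mapping from the angle of incidence to the image radius. The sketch below projects 3D points in the camera frame with such a model; the coefficients and principal point are made-up placeholders, not calibrated WoodScape or SynWoodScape intrinsics.

    import numpy as np

    def project_fisheye(points_cam, k, cx, cy):
        """Project 3D points (camera frame, z forward) to fisheye pixel coordinates.

        Uses a polynomial radial model rho(theta) = k1*theta + k2*theta^2 + ...,
        a common way to describe strongly distorted fisheye lenses. The
        coefficients in `k` are illustrative placeholders only.
        """
        x, y, z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
        chi = np.hypot(x, y)                      # distance from the optical axis
        theta = np.arctan2(chi, z)                # angle of incidence
        rho = sum(ki * theta ** (i + 1) for i, ki in enumerate(k))
        scale = rho / np.maximum(chi, 1e-9)       # radial stretch toward the image radius
        u = scale * x + cx
        v = scale * y + cy
        return np.stack([u, v], axis=1)

    # Illustrative use with made-up intrinsics.
    pts = np.array([[1.0, 0.5, 2.0], [-0.3, 0.1, 1.0]])
    uv = project_fisheye(pts, k=[330.0, -1.0, 2.0, -0.5], cx=640.0, cy=480.0)

Reproducing such a mapping per camera pose is what a simulator has to do for the synthetic images to match the real sensor configuration.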

Effect of Prior-based Losses on Segmentation Performance: A Benchmark

Jan 12, 2022
Rosana El Jurdi, Caroline Petitjean, Veronika Cheplygina, Paul Honeine, Fahed Abdallah

Today, deep convolutional neural networks (CNNs) have demonstrated state-of-the-art performance for medical image segmentation, on various imaging modalities and tasks. Despite early success, segmentation networks may still generate anatomically aberrant segmentations, with holes or inaccuracies near the object boundaries. To enforce anatomical plausibility, recent research studies have focused on incorporating prior knowledge, such as object shape or boundary, as constraints in the loss function. The integrated prior can be low-level, referring to reformulated representations extracted from the ground-truth segmentations, or high-level, representing external medical information such as the organ's shape or size. Over the past few years, prior-based losses have attracted rising interest since they allow expert knowledge to be integrated while remaining architecture-agnostic. However, given the diversity of prior-based losses across medical imaging challenges and tasks, it has become hard to identify which loss works best for which dataset. In this paper, we establish a benchmark of recent prior-based losses for medical image segmentation. The main objective is to provide intuition about which loss to choose for a particular task or dataset. To this end, four low-level and high-level prior-based losses are selected. The considered losses are validated on 8 different datasets from a variety of medical image segmentation challenges, including the Decathlon, ISLES, and WMH challenges. Results show that whereas low-level prior-based losses can guarantee a performance gain over the Dice loss baseline regardless of the dataset characteristics, high-level prior-based losses can increase anatomical plausibility depending on the data characteristics.

* To be submitted to SPIE: Journal of Medical Imaging 
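For readers unfamiliar with the kind of losses being benchmarked, here is a minimal PyTorch sketch of a Dice baseline combined with one common low-level prior-based term, a boundary-style loss computed against a precomputed signed distance map of the ground truth. The function names, the weighting, and the exact formulation are illustrative assumptions, not the benchmarked implementations.

    import torch

    def dice_loss(probs, target, eps=1e-6):
        """Soft Dice loss on (B, H, W) foreground probabilities and binary targets."""
        inter = (probs * target).sum(dim=(1, 2))
        denom = probs.sum(dim=(1, 2)) + target.sum(dim=(1, 2))
        return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()

    def boundary_loss(probs, dist_maps):
        """Low-level prior term: integrate predicted foreground probabilities against a
        signed distance map of the ground-truth boundary (negative inside the object,
        positive outside), so mass placed far outside the object is penalized.
        """
        return (probs * dist_maps).mean()

    # Typical usage, with a hypothetical weight on the prior term:
    # loss = dice_loss(probs, target) + 0.01 * boundary_loss(probs, dist_maps)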

Breaking the Limits of Message Passing Graph Neural Networks

Jun 08, 2021
Muhammet Balcilar, Pierre Héroux, Benoit Gaüzère, Pascal Vasseur, Sébastien Adam, Paul Honeine

Since Message Passing (Graph) Neural Networks (MPNNs) have a linear complexity with respect to the number of nodes when applied to sparse graphs, they have been widely implemented and still raise a lot of interest, even though their theoretical expressive power is limited to the first-order Weisfeiler-Lehman test (1-WL). In this paper, we show that if the graph convolution supports are designed in the spectral domain by a non-linear custom function of the eigenvalues and masked with an arbitrarily large receptive field, the MPNN is theoretically more powerful than the 1-WL test and experimentally as powerful as existing 3-WL models, while remaining spatially localized. Moreover, by designing custom filter functions, the outputs can have various frequency components, which allows the convolution process to learn different relationships between a given input graph signal and its associated properties. So far, the best 3-WL-equivalent graph neural networks have a computational complexity in $\mathcal{O}(n^3)$ with memory usage in $\mathcal{O}(n^2)$, rely on a non-local update mechanism, and do not provide a spectrally rich output profile. The proposed method overcomes all of these problems and reaches state-of-the-art results on many downstream tasks.

* 18 pages, 6 figures 
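A minimal sketch of the general recipe described above, assuming a dense eigendecomposition is affordable: build each convolution support in the spectral domain from a custom function of the normalized-Laplacian eigenvalues, then mask it to a k-hop receptive field so it remains spatially localized. The filter functions and hop count below are illustrative choices, not the paper's exact design.

    import numpy as np

    def design_supports(A, filter_funcs, k_hops=3):
        """Build spectral-designed, spatially masked convolution supports.

        Each support is C_s = U diag(f_s(lambda)) U^T for a custom function f_s of the
        normalized-Laplacian eigenvalues, then zeroed outside a k-hop neighborhood.
        """
        n = A.shape[0]
        d = A.sum(axis=1)
        d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
        L = np.eye(n) - d_inv_sqrt @ A @ d_inv_sqrt        # normalized Laplacian
        lam, U = np.linalg.eigh(L)
        reach = np.linalg.matrix_power(A + np.eye(n), k_hops) > 0   # k-hop mask
        supports = []
        for f in filter_funcs:
            C = U @ np.diag(f(lam)) @ U.T
            supports.append(np.where(reach, C, 0.0))       # spatial localization
        return supports

    # Assumed example frequency profiles: low-pass, linear, and band-pass filters.
    funcs = [lambda l: np.exp(-l), lambda l: l / 2.0, lambda l: np.exp(-5 * (l - 1.0) ** 2)]
    A = np.array([[0, 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0]], float)
    supports = design_supports(A, funcs)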

High-level Prior-based Loss Functions for Medical Image Segmentation: A Survey

Nov 22, 2020
Rosana El Jurdi, Caroline Petitjean, Paul Honeine, Veronika Cheplygina, Fahed Abdallah

Today, deep convolutional neural networks (CNNs) have demonstrated state-of-the-art performance for supervised medical image segmentation, across various imaging modalities and tasks. Despite early success, segmentation networks may still generate anatomically aberrant segmentations, with holes or inaccuracies near the object boundaries. To mitigate this effect, recent research works have focused on incorporating spatial information or prior knowledge to enforce anatomically plausible segmentations. While the integration of prior knowledge into image segmentation is not a new topic in classical optimization approaches, it is today an increasing trend in CNN-based image segmentation, as shown by the growing literature on the topic. In this survey, we focus on high-level priors embedded at the loss-function level. We categorize the articles according to the nature of the prior: object shape, size, topology, and inter-region constraints. We highlight the strengths and limitations of current approaches, discuss the challenges related to the design and integration of prior-based losses and to the optimization strategies, and draw future research directions.
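As a concrete example of a high-level prior embedded at the loss-function level, the sketch below penalizes predicted object sizes that fall outside a plausible range, in the spirit of the size-prior formulations surveyed here. The bounds, weighting, and function name are hypothetical; in practice such bounds come from anatomical knowledge or training statistics.

    import torch

    def size_prior_loss(probs, size_min, size_max):
        """High-level size prior: quadratic penalty when the predicted foreground size
        (in pixels) leaves the plausible range [size_min, size_max].
        `probs` is a (B, H, W) tensor of foreground probabilities.
        """
        pred_size = probs.sum(dim=(1, 2))
        too_small = torch.clamp(size_min - pred_size, min=0.0) ** 2
        too_large = torch.clamp(pred_size - size_max, min=0.0) ** 2
        return (too_small + too_large).mean()

    # Usage sketch with a hypothetical weight and bounds:
    # total_loss = dice_loss(probs, target) + 1e-4 * size_prior_loss(probs, 500.0, 5000.0)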

Statistical learning for sensor localization in wireless networks

May 11, 2020
Daniel Alshamaa, Farah Chehade, Paul Honeine

Indoor localization has become an important issue for wireless sensor networks. This paper presents a zoning-based localization technique that uses WiFi signals and works efficiently in indoor environments. The targeted area is partitioned into several zones, and the objective is to determine the zone in which the sensor resides, using an observation model based on statistical learning.
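A minimal sketch of what a zoning-based observation model can look like: fit a simple per-zone statistical model to offline WiFi RSSI fingerprints, then assign a new measurement to the most likely zone. The diagonal-Gaussian model below is a generic stand-in for illustration, not the paper's estimator.

    import numpy as np

    def fit_zone_models(fingerprints, labels):
        """Fit a diagonal-Gaussian observation model per zone.

        `fingerprints` is an (N, n_aps) array of RSSI vectors collected offline and
        `labels` gives the zone of each fingerprint.
        """
        models = {}
        for z in np.unique(labels):
            X = fingerprints[labels == z]
            models[z] = (X.mean(axis=0), X.std(axis=0) + 1e-3)
        return models

    def predict_zone(models, rssi):
        """Return the zone whose model maximizes the log-likelihood of the observation."""
        def loglik(mu, sigma):
            return -0.5 * np.sum(((rssi - mu) / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2))
        return max(models, key=lambda z: loglik(*models[z]))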

Bridging the Gap Between Spectral and Spatial Domains in Graph Neural Networks

Mar 26, 2020
Muhammet Balcilar, Guillaume Renton, Pierre Héroux, Benoit Gaüzère, Sébastien Adam, Paul Honeine

This paper aims at revisiting graph convolutional neural networks by bridging the gap between the spectral and spatial design of graph convolutions. We theoretically demonstrate some equivalence of the graph convolution process regardless of whether it is designed in the spatial or the spectral domain. The resulting general framework allows us to carry out a spectral analysis of the most popular ConvGNNs, explaining their performance and showing their limits. Moreover, the proposed framework is used to design new convolutions in the spectral domain with a custom frequency profile while applying them in the spatial domain. We also propose a generalization of the depthwise separable convolution framework to graph convolutional networks, which allows decreasing the total number of trainable parameters while keeping the capacity of the model. To the best of our knowledge, such a framework has never been used in the GNN literature. Our proposals are evaluated on both transductive and inductive graph learning problems. The obtained results show the relevance of the proposed method and provide some of the first experimental evidence of the transferability of spectral filter coefficients from one graph to another. Our source code is publicly available at: https://github.com/balcilar/Spectral-Designed-Graph-Convolutions

* 24 pages, 8 figures, preprint 
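The depthwise separable idea mentioned in the abstract can be sketched by analogy with depthwise separable convolutions in CNNs: give each precomputed support only a per-channel scaling vector and share a single pointwise weight matrix across supports, so the parameter count drops from roughly S·F_in·F_out to S·F_in + F_in·F_out. The PyTorch layer below is one plausible instantiation under that assumption, not necessarily the paper's exact parameterization.

    import torch
    import torch.nn as nn

    class DepthwiseSeparableGraphConv(nn.Module):
        """Depthwise separable graph convolution sketch: per-support depthwise channel
        scales plus a single shared pointwise linear map."""
        def __init__(self, in_dim, out_dim, n_supports):
            super().__init__()
            self.depthwise = nn.Parameter(torch.ones(n_supports, in_dim))  # per-support scales
            self.pointwise = nn.Linear(in_dim, out_dim)                    # shared across supports

        def forward(self, supports, h):
            # supports: list of (n, n) tensors; h: (n, in_dim) node features
            out = sum((c @ h) * self.depthwise[s] for s, c in enumerate(supports))
            return torch.relu(self.pointwise(out))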

Une véritable approche $\ell_0$ pour l'apprentissage de dictionnaire (A True $\ell_0$ Approach for Dictionary Learning)

Sep 12, 2017
Yuan Liu, Stéphane Canu, Paul Honeine, Su Ruan

Sparse representation learning has recently achieved great success in signal and image processing, thanks to recent advances in dictionary learning. To this end, the $\ell_0$-norm is often used to control the sparsity level. Nevertheless, optimization problems based on the $\ell_0$-norm are non-convex and NP-hard. For these reasons, relaxation techniques have attracted much attention from researchers, by targeting approximate solutions instead (e.g., the $\ell_1$-norm, pursuit strategies). In contrast, this paper considers the exact $\ell_0$-norm optimization problem and proves that it can be solved effectively, despite its complexity. The proposed method reformulates the problem as a Mixed-Integer Quadratic Program (MIQP) and obtains the globally optimal solution by applying existing optimization software. Because the main difficulty of this approach is its computational time, two techniques are introduced to improve the computational speed. Finally, our method is applied to image denoising, which demonstrates its feasibility and relevance compared to the state of the art.

* in French 
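A common way to cast $\ell_0$-constrained sparse coding as an MIQP is to introduce binary indicator variables linked to the coefficients through big-M bounds. The CVXPY sketch below follows that generic recipe; the bound M, the sparsity budget, and the choice of solver are assumptions, and the paper's acceleration techniques are not reproduced here.

    import cvxpy as cp
    import numpy as np

    def l0_sparse_coding(D, y, k, M=10.0):
        """Exact l0-constrained sparse coding as a Mixed-Integer Quadratic Program:
            min_x ||D x - y||^2  s.t.  ||x||_0 <= k,
        with binary z_i indicating whether atom i is used and big-M bounds |x_i| <= M z_i.
        """
        n = D.shape[1]
        x = cp.Variable(n)
        z = cp.Variable(n, boolean=True)
        constraints = [x <= M * z, x >= -M * z, cp.sum(z) <= k]
        problem = cp.Problem(cp.Minimize(cp.sum_squares(D @ x - y)), constraints)
        problem.solve()  # requires a mixed-integer-capable solver (e.g. GUROBI, MOSEK, SCIP)
        return x.value

    # Hypothetical usage: code a signal over a random dictionary with at most 3 active atoms.
    # D = np.random.randn(32, 64); y = np.random.randn(32)
    # x_hat = l0_sparse_coding(D, y, k=3)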