Shadab Khan

A machine learning-based method for estimating the number and orientations of major fascicles in diffusion-weighted magnetic resonance imaging

Jun 19, 2020
Davood Karimi, Lana Vasung, Camilo Jaimes, Fedel Machado-Rivas, Shadab Khan, Simon K. Warfield, Ali Gholipour

Multi-compartment modeling of diffusion-weighted magnetic resonance imaging measurements is necessary for accurate brain connectivity analysis. Existing methods for estimating the number and orientations of fascicles in an imaging voxel either depend on non-convex optimization techniques that are sensitive to initialization and measurement noise, or are prone to predicting spurious fascicles. In this paper, we propose a machine learning-based technique that accurately estimates the number and orientations of fascicles in a voxel and can be trained with either simulated or real diffusion-weighted imaging data. The method estimates the angle to the closest fascicle for each direction in a set of discrete directions uniformly spread on the unit sphere; this information is then processed to extract the number and orientations of fascicles in the voxel. On realistic simulated phantom data with known ground truth, our method predicts the number and orientations of crossing fascicles more accurately than several existing methods, and it also leads to more accurate tractography. On real data, it matches or outperforms standard methods in robustness to measurement down-sampling and in expert quality assessment of tractography results.
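
The extraction step described above amounts to non-maximum suppression on the sphere: accept directions whose predicted angle to the nearest fascicle is near zero, and discard candidates too close to an already accepted peak. A minimal numpy sketch, with thresholds (`angle_threshold`, `min_separation`) that are illustrative assumptions rather than the paper's values:

```python
import numpy as np

def extract_fascicles(directions, predicted_angles,
                      angle_threshold=0.3, min_separation=0.5):
    """Turn per-direction angle predictions into fascicle estimates.

    directions: (N, 3) unit vectors uniformly spread on the sphere.
    predicted_angles: (N,) model output, angle (radians) from each
    direction to the nearest fascicle. Thresholds are hypothetical.
    """
    # Candidate directions: predicted angle to the nearest fascicle is small.
    candidates = np.where(predicted_angles < angle_threshold)[0]
    # Visit the best-supported candidates first.
    candidates = candidates[np.argsort(predicted_angles[candidates])]
    fascicles = []
    for i in candidates:
        d = directions[i]
        # abs() folds antipodal directions together: d and -d are one fascicle.
        if all(np.arccos(min(abs(d @ f), 1.0)) > min_separation
               for f in fascicles):
            fascicles.append(d)
    return len(fascicles), np.array(fascicles)
```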

FAIRS -- Soft Focus Generator and Attention for Robust Object Segmentation from Extreme Points

Apr 04, 2020
Ahmed H. Shahin, Prateek Munjal, Ling Shao, Shadab Khan

Semantic segmentation from user inputs has been actively studied to facilitate interactive segmentation for data annotation and other applications. Recent studies have shown that extreme points can effectively encode user inputs: a heat map generated from the extreme points is appended to the RGB image and fed to the model during training. In this study, we present FAIRS, a new approach for generating object segmentations from user inputs in the form of extreme points and corrective clicks. We propose a scalable encoding of this user input that allows the network to work with a variable number of clicks, including corrective clicks for output refinement. We also integrate a dual attention module to increase the model's efficacy in preferentially attending to the objects of interest. We demonstrate that these additions yield significant improvements over the state of the art in dense object segmentation from user inputs on multiple large-scale datasets. Through experiments, we demonstrate our method's ability to generate high-quality training data, as well as its scalability in incorporating extreme points, guiding clicks, and corrective clicks in a principled manner.
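
As a rough illustration of the kind of click encoding the abstract describes, the sketch below renders an arbitrary number of extreme points or corrective clicks into one heat-map channel that can be appended to the RGB input; the Gaussian form and `sigma` are assumptions, not the paper's exact formulation:

```python
import numpy as np

def click_heatmap(shape, clicks, sigma=10.0):
    """Render clicks (extreme points or corrective clicks) as one channel.

    shape: (H, W) of the image; clicks: list of (row, col) coordinates.
    The Gaussian spread `sigma` is an illustrative choice.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    heat = np.zeros((h, w))
    for r, c in clicks:
        blob = np.exp(-((ys - r) ** 2 + (xs - c) ** 2) / (2 * sigma ** 2))
        # Taking the maximum keeps each peak at 1 even when clicks overlap,
        # and works for any number of clicks.
        heat = np.maximum(heat, blob)
    return heat

# Appended to the image before it enters the network:
# x = np.concatenate([rgb, click_heatmap(rgb.shape[:2], clicks)[..., None]],
#                    axis=-1)  # (H, W, 4)
```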

Towards Robust and Reproducible Active Learning Using Neural Networks

Feb 21, 2020
Prateek Munjal, Nasir Hayat, Munawar Hayat, Jamshid Sourati, Shadab Khan

Active learning (AL) is a promising ML paradigm with the potential to sift through large unlabeled datasets and reduce annotation cost in domains where labeling all of the data is prohibitive. Recently proposed neural network-based AL methods use different heuristics to accomplish this goal. In this study, we show that recent AL methods offer a gain over the random-sampling baseline only under a brittle combination of experimental conditions. We demonstrate that such marginal gains vanish when experimental factors are changed, leading to reproducibility issues and suggesting that AL methods lack robustness. We also observe that with a properly tuned model employing recently proposed regularization techniques, performance improves significantly for all AL methods, including the random-sampling baseline, and performance differences among the AL methods become negligible. Based on these observations, we suggest a set of experiments that are critical for assessing the true effectiveness of an AL method, and we present an open-source toolkit to facilitate them. We believe our findings and recommendations will help advance reproducible research in robust AL using neural networks.
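
The comparison the abstract calls for can be pictured as a pool-based loop in which the random-sampling control runs under exactly the same training and regularization settings as each AL heuristic, repeated over several seeds. A sketch of the selection step, with entropy as a stand-in uncertainty heuristic (all names here are illustrative):

```python
import numpy as np

def entropy(probs):
    """Predictive entropy of softmax outputs, shape (N, C) -> (N,)."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def select_batch(pool_probs, budget, rng, random_baseline=False):
    """Choose the next `budget` pool indices to label.

    With random_baseline=True this is the control that, per the paper's
    findings, a well-regularized model often makes indistinguishable
    from the AL heuristics.
    """
    if random_baseline:
        return rng.choice(len(pool_probs), size=budget, replace=False)
    # Otherwise take the most uncertain points under the current model.
    return np.argsort(entropy(pool_probs))[-budget:]

# A robust evaluation repeats the full label-train-select loop over
# multiple seeds and reports the spread, not a single run:
# for seed in range(5):
#     rng = np.random.default_rng(seed)
#     ... train, score the pool, select_batch(...), retrain ...
```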

Extreme Points Derived Confidence Map as a Cue For Class-Agnostic Segmentation Using Deep Neural Network

Jun 06, 2019
Shadab Khan, Ahmed H. Shahin, Javier Villafruela, Jianbing Shen, Ling Shao

To automate the segmentation of an anatomy of interest, we can learn a model from previously annotated data. The learning-based approach uses annotations to train a model that emulates expert labeling on new data. While tremendous progress has been made with such approaches, labeling medical images remains a time-consuming and expensive task. In this paper, we evaluate the utility of extreme points in learning to segment. Specifically, we propose a novel approach that computes a confidence map from extreme points, quantitatively encoding the priors they provide. We use the confidence map as a cue to train a deep neural network based on ResNet-101 and a PSP module, yielding a class-agnostic segmentation model that outperforms the state-of-the-art method employing extreme points as a cue. Further, we evaluate a realistic use case by using our model to generate training data for supervised learning (U-Net) and observe that U-Net performs comparably when trained with either the generated data or the ground-truth data. These findings suggest that models trained using cues can generate reliable training data.
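
One way to picture a confidence map built from extreme points, as opposed to isolated heat-map blobs: confidence is high inside the region the four points span and decays with distance outside it. The sketch below is a loose illustration under that assumption; the paper's exact formulation differs, and `tau` is hypothetical.

```python
import numpy as np

def confidence_map(shape, extreme_points, tau=40.0):
    """Hypothetical confidence map from four extreme points.

    shape: (H, W); extreme_points: (4, 2) array of (row, col) for the
    top, bottom, left, and right extremes; `tau` sets the decay length.
    """
    h, w = shape
    pts = np.asarray(extreme_points)
    r0, c0 = pts.min(axis=0)           # bounding box spanned by the points
    r1, c1 = pts.max(axis=0)
    ys, xs = np.mgrid[0:h, 0:w]
    # Distance from each pixel to the box (zero for pixels inside it).
    dr = np.maximum(np.maximum(r0 - ys, ys - r1), 0)
    dc = np.maximum(np.maximum(c0 - xs, xs - c1), 0)
    dist = np.sqrt(dr ** 2 + dc ** 2)
    # Confidence 1 inside the box, decaying exponentially outside.
    return np.exp(-dist / tau)
```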

Real-time Deep Pose Estimation with Geodesic Loss for Image-to-Template Rigid Registration

Aug 18, 2018
Seyed Sadegh Mohseni Salehi, Shadab Khan, Deniz Erdogmus, Ali Gholipour

With the aim of increasing the capture range and accelerating the performance of state-of-the-art inter-subject and subject-to-template 3D registration, we propose deep learning-based methods trained to find the 3D pose of arbitrarily oriented subjects or anatomy from slices or volumes of medical images. To this end, we propose regression CNNs that learn to predict the angle-axis representation of 3D rotations and translations from image features. We compare mean squared error and geodesic loss for training these CNNs in two scenarios: slice-to-volume registration and volume-to-volume registration. Our results show that in registration applications amenable to learning, the proposed methods with geodesic loss minimization achieve accurate results with a wide capture range in real time (<100 ms). We also tested the generalization capability of the trained CNNs on an expanded age range and on images of newborn subjects with similar and different MR image contrasts. We trained our models on T2-weighted fetal brain MRI scans and used them to predict the 3D pose of newborn brains from T1-weighted MRI scans. The trained models generalized well to the new domain when we performed image contrast transfer through a conditional generative adversarial network, indicating that the domain of application of the trained deep regression CNNs can be expanded to image modalities and contrasts beyond those used in training. Combining our proposed methods with accelerated optimization-based registration algorithms can dramatically enhance the performance of automatic imaging devices and future image processing methods.
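
The geodesic loss the abstract refers to is the standard rotation-angle distance on SO(3): for rotations R1 and R2, theta = arccos((trace(R1^T R2) - 1) / 2). Below is a numpy sketch of that metric, together with Rodrigues' formula for the angle-axis representation the CNNs regress; a training version would be the autograd equivalent in the framework of choice.

```python
import numpy as np

def axis_angle_to_matrix(axis, angle):
    """Rodrigues' formula: angle-axis -> 3x3 rotation matrix."""
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    K = np.array([[0.0, -a[2], a[1]],
                  [a[2], 0.0, -a[0]],
                  [-a[1], a[0], 0.0]])          # cross-product matrix
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def geodesic_distance(R_pred, R_true, eps=1e-7):
    """Rotation angle of R_pred^T R_true, in [0, pi].

    Unlike mean squared error on the pose parameters, this measures
    the actual angular misalignment between the two rotations.
    """
    cos = (np.trace(R_pred.T @ R_true) - 1.0) / 2.0
    return np.arccos(np.clip(cos, -1.0 + eps, 1.0 - eps))
```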

* This work has been submitted to TMI 