
"cancer detection": models, code, and papers

Self-Supervised U-Net for Segmenting Flat and Sessile Polyps

Oct 17, 2021
Debayan Bhattacharya, Christian Betz, Dennis Eggert, Alexander Schlaefer

Colorectal cancer (CRC) poses a great risk to public health; it is the third most common cancer in the US. The development of colorectal polyps is one of the earliest signs of the disease, and early detection and resection of polyps can raise the survival rate to as high as 90%. Manual inspection can lead to missed polyps because they vary in color, shape, size, texture and appearance. To this end, Computer-Aided Diagnosis (CADx) systems have been proposed that detect polyps by processing colonoscopic videos. Such a system acts as a secondary check that helps clinicians reduce misdetections so that polyps can be resected before they turn cancerous. Despite the prominence of CADx solutions, the polyp miss rate remains between 6% and 27%, and sessile and flat polyps with a diameter of less than 10 mm are even more likely to go undetected. Convolutional Neural Networks (CNNs) have shown promising results in polyp segmentation, but these works take a supervised approach and are limited by the size of the dataset; it has been observed that smaller datasets reduce the segmentation accuracy of ResUNet++. We train a U-Net to inpaint randomly dropped-out pixels in the image as a proxy task, pre-training on the Kvasir-SEG dataset, followed by supervised training on the limited Kvasir-Sessile dataset. Our experimental results demonstrate that, with a limited annotated dataset and a larger unlabeled dataset, the self-supervised approach is a better alternative to a fully supervised one. Specifically, our self-supervised U-Net outperforms five segmentation models trained in a supervised manner on the Kvasir-Sessile dataset.
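The proxy task itself is easy to reproduce: randomly drop pixels from an unlabeled frame and train the network to fill them back in. Below is a minimal PyTorch sketch of one such pre-training step; the 25% dropout ratio, the generic `unet` model and the optimizer are illustrative assumptions rather than details taken from the paper.

    import torch
    import torch.nn.functional as F

    def inpainting_pretrain_step(unet, images, optimizer, drop_prob=0.25):
        """One self-supervised step: reconstruct randomly dropped-out pixels.

        unet: any image-to-image network (e.g. a standard U-Net).
        images: a batch of unlabeled colonoscopy frames, shape (B, C, H, W).
        drop_prob: fraction of pixels to drop (illustrative choice).
        """
        # Binary mask: 1 = keep pixel, 0 = dropped (to be inpainted).
        mask = (torch.rand_like(images[:, :1]) > drop_prob).float()
        corrupted = images * mask

        reconstruction = unet(corrupted)
        # Reconstruction loss is computed only on the dropped pixels.
        loss = F.mse_loss(reconstruction * (1 - mask), images * (1 - mask))

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

After pre-training on the unlabeled Kvasir-SEG images, the same weights would be fine-tuned on the small annotated Kvasir-Sessile set with an ordinary segmentation loss.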

  

Primary Tumor Origin Classification of Lung Nodules in Spectral CT using Transfer Learning

Jun 30, 2020
Linde S. Hesse, Pim A. de Jong, Josien P. W. Pluim, Veronika Cheplygina

Early detection of lung cancer has been proven to decrease mortality significantly. A recent development in computed tomography (CT), spectral CT, can potentially improve diagnostic accuracy, as it yields more information per scan than regular CT. However, the sheer workload involved in analyzing a large number of scans drives the need for automated diagnosis methods. We therefore propose a detection and classification system for lung nodules in CT scans and investigate whether spectral images can increase classifier performance. For nodule detection we trained a VGG-like 3D convolutional neural network (CNN). To obtain a primary tumor classifier for our dataset, we pre-trained a 3D CNN with a similar architecture on nodule malignancies from a large publicly available dataset, LIDC-IDRI, and then used this pre-trained network as a feature extractor for the nodules in our dataset. The resulting feature vectors were classified into two (benign/malignant) and three (benign/primary lung cancer/metastases) classes using a support vector machine (SVM), both at nodule and at scan level. We obtained state-of-the-art performance for detection and malignancy regression on the LIDC-IDRI database. Classification performance on our own dataset was higher for scan-level than for nodule-level predictions, with an accuracy of 78% for the three-class scan-level task. Spectral features did increase classifier performance, but not significantly. Our work suggests that a pre-trained feature extractor can be used as a primary tumor origin classifier for lung nodules, eliminating the need for elaborate fine-tuning of a new network and for large datasets. Code is available at https://github.com/tueimage/lung-nodule-msc-2018.

* MSc thesis Linde Hesse 
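The second stage of the pipeline, using the pre-trained 3D CNN purely as a feature extractor and fitting an SVM on top, reduces to a few lines of scikit-learn. In the sketch below, the feature files, the RBF kernel and the regularization constant are assumptions made for illustration, not the authors' exact configuration.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Hypothetical feature files: nodule feature vectors taken from the
    # penultimate layer of a CNN pre-trained on LIDC-IDRI malignancy labels.
    train_features = np.load("train_nodule_features.npy")   # shape (N, D)
    train_labels = np.load("train_nodule_labels.npy")       # 0 = benign, 1 = primary, 2 = metastasis
    test_features = np.load("test_nodule_features.npy")

    # Standardize the features, then fit the SVM (kernel choice is illustrative).
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(train_features, train_labels)
    predictions = clf.predict(test_features)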
  

Genetic Deep Learning for Lung Cancer Screening

Jul 27, 2019
Hunter Park, Connor Monahan

Convolutional neural networks (CNNs) have shown great promise in improving computer-aided detection (CADe). From classifying tumors found via mammography as benign or malignant to automated detection of colorectal polyps in CT colonography, these advances have helped reduce the need for further evaluation with invasive testing and prevent errors from missed diagnoses by acting as a second observer in today's fast-paced, high-volume clinical environment. CADe methods have become faster and more precise thanks to innovations in deep learning over the past several years. With advances such as the inception module and the use of residual connections, designing CNN architectures has become an art: it is customary to take proven models and fine-tune them for a particular task and dataset, which often requires tedious work. We investigated using a genetic algorithm (GA) to conduct a neural architecture search (NAS) and generate a novel CNN architecture for finding early-stage lung cancer in chest X-rays (CXR). Using a dataset of over twelve thousand biopsy-proven cases of lung cancer, the trained classification model achieved an accuracy of 97.15% with a PPV of 99.88% and an NPV of 94.81%, beating models such as Inception-V3 and ResNet-152 while reducing the number of parameters by factors of 4 and 14, respectively.
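At its core, a GA-driven architecture search alternates evaluation, selection and mutation of candidate network descriptions. The sketch below shows that loop in schematic Python; the genome encoding, the mutation rules and the population sizes are placeholders, and `fitness` stands in for building, briefly training and validating a candidate CNN, none of which reflects the authors' actual implementation.

    import random

    def random_genome():
        """A toy architecture encoding: number of conv blocks and filters per block."""
        return {"blocks": random.randint(2, 6),
                "filters": [random.choice([16, 32, 64, 128]) for _ in range(6)]}

    def mutate(genome):
        """Perturb one genome: tweak the depth and one filter count."""
        child = {"blocks": genome["blocks"], "filters": list(genome["filters"])}
        if random.random() < 0.5:
            child["blocks"] = max(2, min(6, child["blocks"] + random.choice([-1, 1])))
        i = random.randrange(len(child["filters"]))
        child["filters"][i] = random.choice([16, 32, 64, 128])
        return child

    def evolve(fitness, population_size=20, generations=10):
        """fitness(genome) would build, train briefly and return validation accuracy."""
        population = [random_genome() for _ in range(population_size)]
        for _ in range(generations):
            scored = sorted(population, key=fitness, reverse=True)
            parents = scored[: population_size // 4]          # keep the top quarter
            population = parents + [mutate(random.choice(parents))
                                    for _ in range(population_size - len(parents))]
        return max(population, key=fitness)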

  

Dual Skip Connections Minimize the False Positive Rate of Lung Nodule Detection in CT images

Oct 25, 2021
Jiahua Xu, Philipp Ernst, Tung Lung Liu, Andreas Nürnberger

Pulmonary cancer is one of the most commonly diagnosed and most fatal cancers and is often found incidentally on computed tomography. Automated pulmonary nodule detection is an essential part of computer-aided diagnosis, yet quickly and accurately locating the exact positions of nodules remains a great challenge. This paper proposes a dual skip connection upsampling strategy, based on the Dual Path Network in a U-Net structure, that generates multiscale feature maps with the aim of minimizing the false positive rate and maximizing the sensitivity of nodule detection. The results show that our new upsampling strategy improves performance, reaching 85.3% sensitivity at 4 false positives per image in the FROC analysis, compared to 84.2% for the regular upsampling strategy and 81.2% for a VGG16-based Faster R-CNN.

* to be published at IEEE EMBC 2021, in IEEE Xplore 
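As a rough picture of what a dual skip connection in a U-Net-style decoder could look like, the block below concatenates two encoder feature maps (for instance, the residual and dense paths of a Dual Path block) with the upsampled decoder features. It is a 2D simplification with assumed channel sizes, not the paper's architecture.

    import torch
    import torch.nn as nn

    class DualSkipUpBlock(nn.Module):
        """Decoder block fed by two skip connections instead of one (illustrative)."""

        def __init__(self, in_ch, skip_ch_a, skip_ch_b, out_ch):
            super().__init__()
            self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
            self.fuse = nn.Sequential(
                nn.Conv2d(out_ch + skip_ch_a + skip_ch_b, out_ch, 3, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )

        def forward(self, x, skip_a, skip_b):
            x = self.up(x)                                # upsample decoder features
            x = torch.cat([x, skip_a, skip_b], dim=1)     # fuse both skip paths
            return self.fuse(x)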
  

Positive-unlabeled Learning for Cell Detection in Histopathology Images with Incomplete Annotations

Jun 30, 2021
Zipei Zhao, Fengqian Pang, Zhiwen Liu, Chuyang Ye

Cell detection in histopathology images is of great value in clinical practice. Convolutional neural networks (CNNs) have been applied to cell detection to improve detection accuracy, where cell annotations are required for network training. However, due to the variety and large number of cells, obtaining complete annotations that include every cell of interest in the training images can be challenging. In practice, annotations are usually incomplete: the positive labels are carefully examined to ensure their reliability, but other positive instances, i.e., cells of interest, may be missing from the annotations. This annotation strategy leads to a lack of knowledge about true negative samples. Most existing methods simply treat unlabeled instances as truly negative during network training, which can adversely affect network performance. In this work, to address the problem of incomplete annotations, we formulate the training of detection networks as a positive-unlabeled learning problem. Specifically, the classification loss used in network training is revised to account for incomplete annotations: the terms corresponding to negative samples are approximated using the labeled positive samples together with the remaining samples whose labels are unknown. To evaluate the proposed method, experiments were performed on a publicly available dataset for mitosis detection in breast cancer cells, and the results show that our method improves cell detection performance when only incomplete annotations are available for training.

* Accepted by MICCAI 2021 
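A common way to realize this idea is a non-negative positive-unlabeled risk estimator, in which the negative-class risk is rewritten in terms of the labeled positives and the unlabeled samples and clamped at zero. The sketch below follows that standard formulation (Kiryo et al., 2017) with a logistic surrogate loss; the exact loss used in the paper may differ in its details.

    import torch
    import torch.nn.functional as F

    def nn_pu_loss(logits_pos, logits_unl, prior):
        """Non-negative PU risk estimator (illustrative, not the paper's exact loss).

        logits_pos: classifier logits for labeled positive samples.
        logits_unl: logits for unlabeled samples (a mix of positives and negatives).
        prior: assumed class prior P(y = 1), which has to be estimated separately.
        """
        # Logistic surrogate losses for predicting "positive" and "negative".
        loss_pos_as_pos = F.softplus(-logits_pos).mean()   # positives predicted positive
        loss_pos_as_neg = F.softplus(logits_pos).mean()    # positives predicted negative
        loss_unl_as_neg = F.softplus(logits_unl).mean()    # unlabeled predicted negative

        # Negative-class risk estimated from unlabeled and positive samples.
        negative_risk = loss_unl_as_neg - prior * loss_pos_as_neg
        # Clamp at zero so the estimator cannot go negative and cause overfitting.
        return prior * loss_pos_as_pos + torch.clamp(negative_risk, min=0.0)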
  

TIAger: Tumor-Infiltrating Lymphocyte Scoring in Breast Cancer for the TiGER Challenge

Jun 23, 2022
Adam Shephard, Mostafa Jahanifar, Ruoyu Wang, Muhammad Dawood, Simon Graham, Kastytis Sidlauskas, Syed Ali Khurram, Nasir Rajpoot, Shan E Ahmed Raza

The quantification of tumor-infiltrating lymphocytes (TILs) has been shown to be an independent predictor of prognosis for breast cancer patients. Typically, pathologists estimate the proportion of the stromal region that contains TILs to obtain a TILs score. The Tumor InfiltratinG lymphocytes in breast cancER (TiGER) challenge aims to assess the prognostic significance of computer-generated TILs scores for predicting survival as part of a Cox proportional hazards model. For this challenge, as the TIAger team, we developed an algorithm that first segments tumor vs. stroma, then localises the tumor bulk region for TILs detection, and finally uses these outputs to generate a TILs score for each case. On preliminary testing, our approach achieved a tumor-stroma weighted Dice score of 0.791 and an FROC score of 0.572 for lymphocyte detection. For predicting survival, our model achieved a C-index of 0.719. These results took first place on the preliminary testing leaderboards of the TiGER challenge.

* TiGER Challenge entry 
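Given the segmentation and detection outputs, the final score essentially reduces to the fraction of tumor-associated stroma occupied by detected lymphocytes. The numpy sketch below illustrates that computation; the mask conventions and the nominal area assigned to each detected cell are assumptions, not the team's actual post-processing.

    import numpy as np

    def tils_score(stroma_mask, tumor_bulk_mask, lymphocyte_centres, cell_area_px=85):
        """Illustrative TILs score: % of tumor-associated stroma covered by TILs.

        stroma_mask, tumor_bulk_mask: boolean arrays of the same shape.
        lymphocyte_centres: (N, 2) integer pixel coordinates (row, col) of detections.
        cell_area_px: nominal area per detected lymphocyte (illustrative constant).
        """
        stroma_in_bulk = stroma_mask & tumor_bulk_mask
        stroma_area = stroma_in_bulk.sum()
        if stroma_area == 0:
            return 0.0

        rows, cols = lymphocyte_centres[:, 0], lymphocyte_centres[:, 1]
        in_stroma = stroma_in_bulk[rows, cols]          # keep detections inside the stroma
        til_area = in_stroma.sum() * cell_area_px
        return float(min(100.0, 100.0 * til_area / stroma_area))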
  

REPLICA: Enhanced Feature Pyramid Network by Local Image Translation and Conjunct Attention for High-Resolution Breast Tumor Detection

Nov 22, 2021
Yifan Zhang, Haoyu Dong, Nicolas Konz, Hanxue Gu, Maciej A. Mazurowski

We introduce an improvement to the feature pyramid network of standard object detection models, which we call enhanced featuRE Pyramid network by Local Image translation and Conjunct Attention, or REPLICA. REPLICA improves object detection performance by simultaneously (1) generating realistic but fake images with simulated objects, to mitigate the data-hungry nature of the attention mechanism, and (2) advancing the detection architecture through a novel modification of attention on image feature patches. Specifically, we use a convolutional autoencoder as a generator to create new images by injecting objects into images via local interpolation and reconstruction of their features extracted in hidden layers. Then, with the larger number of simulated images available, we use a visual transformer to enhance the outputs of each ResNet layer that serve as inputs to the feature pyramid network. We apply our methodology to the problem of detecting lesions in Digital Breast Tomosynthesis (DBT) scans, a high-resolution medical imaging modality crucial to breast cancer screening. Our experimental results demonstrate, both qualitatively and quantitatively, that REPLICA improves the accuracy of tumor detection over the standard object detection framework.
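One way to picture the attention component is a transformer encoder applied to the flattened spatial positions of a ResNet feature map before it is fed into the feature pyramid network. The sketch below shows that wiring in PyTorch; it is only a schematic reading of the abstract, with assumed depth and head count, not the REPLICA architecture itself.

    import torch.nn as nn

    class FeatureMapAttention(nn.Module):
        """Self-attention over a CNN feature map (schematic, not the REPLICA design)."""

        def __init__(self, channels, num_heads=8, depth=2):
            super().__init__()
            # channels must be divisible by num_heads.
            layer = nn.TransformerEncoderLayer(d_model=channels, nhead=num_heads,
                                               batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

        def forward(self, feature_map):                        # (B, C, H, W) from a ResNet stage
            b, c, h, w = feature_map.shape
            tokens = feature_map.flatten(2).transpose(1, 2)    # (B, H*W, C)
            tokens = self.encoder(tokens)                      # attention across spatial positions
            return tokens.transpose(1, 2).reshape(b, c, h, w)  # fed to the FPN instead of the raw map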

  

EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer

Jan 13, 2022
Jiaqiao Shi, Aleksandar Vakanski, Min Xian, Jianrui Ding, Chunping Ning

Deep learning-based computer-aided diagnosis has achieved unprecedented performance in breast cancer detection. However, most approaches are computationally intensive, which impedes their broader dissemination in real-world applications. In this work, we propose an efficient, lightweight multitask learning architecture that classifies and segments breast tumors simultaneously. We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions. Moreover, we propose a new numerically stable loss function that easily controls the balance between the sensitivity and specificity of cancer detection. The proposed approach is evaluated on a breast ultrasound dataset of 1,511 images. The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively. We validate the model on a virtual mobile device, where the average inference time is 0.35 seconds per image.
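The abstract does not spell out the loss, but the standard numerically stable way to trade sensitivity against specificity in a binary classifier is a weighted cross-entropy computed directly on logits. The sketch below illustrates that general idea and should not be read as the exact EMT-NET loss.

    import torch
    import torch.nn.functional as F

    def weighted_logit_bce(logits, targets, pos_weight=2.0):
        """Weighted binary cross-entropy on logits (illustrative, not the paper's loss).

        Working on logits (the log-sum-exp inside binary_cross_entropy_with_logits)
        avoids the overflow/underflow of applying sigmoid and log separately.
        pos_weight > 1 penalizes missed cancers more, pushing sensitivity up at
        the cost of specificity; pos_weight < 1 does the opposite.
        targets: float tensor of 0/1 labels with the same shape as logits.
        """
        weight = torch.tensor(pos_weight, device=logits.device)
        return F.binary_cross_entropy_with_logits(logits, targets, pos_weight=weight)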

  

Automated Detection of Acute Leukemia using K-mean Clustering Algorithm

Mar 06, 2018
Sachin Kumar, Sumita Mishra, Pallavi Asthana, Pragya

Leukemia is a hematologic cancer that develops in blood tissue and triggers the rapid production of immature, abnormally shaped white blood cells. Statistics show that leukemia is one of the leading causes of death in men and women alike. Microscopic examination of a blood sample or bone marrow smear is the most effective technique for diagnosing leukemia: pathologists analyze microscopic samples and make diagnostic assessments on the basis of characteristic cell features. Recently, computerized methods for cancer detection have been explored to minimize human intervention and provide accurate clinical information. This paper presents an algorithm for automated, image-based acute leukemia detection. The method uses basic enhancement, morphology, filtering and segmentation techniques to extract the region of interest with the k-means clustering algorithm. The proposed algorithm achieves an accuracy of 92.8% and is tested with K-Nearest Neighbor (KNN) and Naive Bayes classifiers on a dataset of 60 samples.

* Advances in Intelligent Systems and Computing, vol 554. Springer, 2018 
* Presented in ICCCCS 2016 
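The central segmentation step, clustering pixel colors with k-means to isolate the stained white blood cells, can be reproduced in a few lines of scikit-learn. In the sketch below, the choice of three clusters and the "darkest cluster is the nucleus" heuristic are illustrative assumptions rather than the paper's exact pipeline.

    import numpy as np
    from sklearn.cluster import KMeans

    def segment_cells(image_rgb, n_clusters=3):
        """Cluster pixel colors with k-means and return a candidate cell mask.

        image_rgb: uint8 array of shape (H, W, 3), e.g. a stained blood smear.
        Assumes the stained leukocyte nuclei form the darkest color cluster.
        """
        h, w, _ = image_rgb.shape
        pixels = image_rgb.reshape(-1, 3).astype(np.float32)

        kmeans = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
        labels = kmeans.labels_.reshape(h, w)

        # Pick the cluster with the lowest mean intensity as the nucleus region.
        darkest = np.argmin(kmeans.cluster_centers_.sum(axis=1))
        return labels == darkest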
  

Breast Cancer Classification from Histopathological Images with Inception Recurrent Residual Convolutional Neural Network

Nov 10, 2018
Md Zahangir Alom, Chris Yakopcic, Tarek M. Taha, Vijayan K. Asari

The Deep Convolutional Neural Network (DCNN) is one of the most powerful and successful deep learning approaches, and DCNNs have already delivered superior performance across different modalities of medical imaging, including breast cancer classification, segmentation, and detection. Breast cancer is one of the most common and dangerous cancers affecting women worldwide. In this paper, we propose a method for breast cancer classification with the Inception Recurrent Residual Convolutional Neural Network (IRRCNN) model. The IRRCNN is a powerful DCNN that combines the strengths of the Inception network (Inception-v4), the Residual network (ResNet), and the Recurrent Convolutional Neural Network (RCNN), and it has shown superior performance against equivalent Inception networks, Residual networks, and RCNNs on object recognition tasks. Here the IRRCNN approach is applied to breast cancer classification on two publicly available datasets, BreakHis and the Breast Cancer Classification Challenge 2015. The experimental results are compared against existing machine learning and deep learning approaches with respect to image-based, patch-based, image-level, and patient-level classification. The IRRCNN model provides superior classification performance in terms of sensitivity, Area Under the Curve (AUC), the ROC curve, and global accuracy compared to existing approaches on both datasets.

* 15 pages, 9 figures, 9 tables 
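The building block behind the IRRCNN name is a recurrent convolutional layer wrapped in a residual connection. A simplified 2D PyTorch version is sketched below; the Inception-style multi-branch convolutions of the full model are omitted, and the recurrence details are assumptions based on the general recurrent-residual design rather than the authors' code.

    import torch.nn as nn

    class RecurrentResidualBlock(nn.Module):
        """Simplified recurrent-residual conv block in the spirit of IRRCNN."""

        def __init__(self, channels, steps=2):
            super().__init__()
            self.steps = steps
            self.conv = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )

        def forward(self, x):
            state = self.conv(x)
            for _ in range(self.steps):
                state = self.conv(state + x)      # recurrent refinement, re-injecting the input
            return x + state                      # residual shortcut around the whole block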
  