
"cancer detection": models, code, and papers

Positive-unlabeled Learning for Cell Detection in Histopathology Images with Incomplete Annotations

Jun 30, 2021
Zipei Zhao, Fengqian Pang, Zhiwen Liu, Chuyang Ye

Cell detection in histopathology images is of great value in clinical practice. Convolutional neural networks (CNNs) have been applied to cell detection to improve detection accuracy, where cell annotations are required for network training. However, due to the variety and large number of cells, obtaining complete annotations that include every cell of interest in the training images can be challenging. Usually, only incomplete annotations can be obtained, where the positive labels are carefully examined to ensure their reliability, but other positive instances, i.e., cells of interest, may be missing from the annotations. This annotation strategy leaves no knowledge about true negative samples. Most existing methods simply treat instances not labeled as positive as truly negative during network training, which can adversely affect network performance. In this work, to address the problem of incomplete annotations, we formulate the training of detection networks as a positive-unlabeled learning problem. Specifically, the classification loss in network training is revised to take incomplete annotations into account: the terms corresponding to negative samples are approximated using the true positive samples and the remaining samples, whose labels are unknown. To evaluate the proposed method, experiments were performed on a publicly available dataset for mitosis detection in breast cancer cells, and the results show that our method improves cell detection performance given incomplete annotations for training.
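The revised classification loss described above follows the positive-unlabeled (PU) learning literature. Below is a minimal sketch of the non-negative PU risk estimator as an illustration of the general technique, not the authors' exact loss; `prior` stands for the assumed positive-class prior:

```python
import numpy as np

def logistic_loss(z):
    # numerically stable log(1 + exp(-z))
    return np.logaddexp(0.0, -z)

def nn_pu_risk(scores_pos, scores_unl, prior):
    """Non-negative PU risk estimate; positive-class score is > 0."""
    r_pos = prior * logistic_loss(scores_pos).mean()
    # negative-class risk approximated from unlabeled data minus the positive part
    r_neg = logistic_loss(-scores_unl).mean() - prior * logistic_loss(-scores_pos).mean()
    return r_pos + max(0.0, r_neg)  # clamp keeps the empirical risk non-negative
```

The clamp in the last line is what prevents the estimated negative risk from going below zero when the labeled positives dominate the unlabeled pool.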

* Accepted by MICCAI 2021 

REPLICA: Enhanced Feature Pyramid Network by Local Image Translation and Conjunct Attention for High-Resolution Breast Tumor Detection

Nov 22, 2021
Yifan Zhang, Haoyu Dong, Nicolas Konz, Hanxue Gu, Maciej A. Mazurowski

We introduce an improvement to the feature pyramid network of standard object detection models. We call our method enhanced featuRE Pyramid network by Local Image translation and Conjunct Attention, or REPLICA. REPLICA improves object detection performance by simultaneously (1) generating realistic but fake images with simulated objects to mitigate the data-hungry problem of the attention mechanism, and (2) advancing the detection model architecture through a novel modification of attention on image feature patches. Specifically, we use a convolutional autoencoder as a generator to create new images by injecting objects into images via local interpolation and reconstruction of their features extracted in hidden layers. Then, given the larger number of simulated images, we use a vision transformer to enhance the outputs of each ResNet layer that serve as inputs to a feature pyramid network. We apply our methodology to the problem of detecting lesions in Digital Breast Tomosynthesis (DBT) scans, a high-resolution medical imaging modality crucial in breast cancer screening. Our experimental results demonstrate, qualitatively and quantitatively, that REPLICA improves the accuracy of tumor detection over a standard object detection framework.
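Self-attention over feature patches, the core of the conjunct-attention component, can be sketched as follows. This is a single head with identity query/key/value projections, purely illustrative and not the REPLICA architecture:

```python
import numpy as np

def patch_self_attention(patches):
    # patches: (n, d) array of flattened image-feature patches
    d = patches.shape[1]
    scores = patches @ patches.T / np.sqrt(d)   # scaled dot-product similarities
    scores -= scores.max(axis=1, keepdims=True)  # shift for softmax stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    # each output patch is a convex combination of all input patches
    return weights @ patches
```

In a real model, learned projection matrices would map `patches` to separate query, key, and value spaces before the dot product.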


EMT-NET: Efficient multitask network for computer-aided diagnosis of breast cancer

Jan 13, 2022
Jiaqiao Shi, Aleksandar Vakanski, Min Xian, Jianrui Ding, Chunping Ning

Deep learning-based computer-aided diagnosis has achieved unprecedented performance in breast cancer detection. However, most approaches are computationally intensive, which impedes their broader dissemination in real-world applications. In this work, we propose an efficient and lightweight multitask learning architecture to classify and segment breast tumors simultaneously. We incorporate a segmentation task into a tumor classification network, which makes the backbone network learn representations focused on tumor regions. Moreover, we propose a new numerically stable loss function that easily controls the balance between the sensitivity and specificity of cancer detection. The proposed approach is evaluated using a breast ultrasound dataset with 1,511 images. The accuracy, sensitivity, and specificity of tumor classification are 88.6%, 94.1%, and 85.3%, respectively. We validate the model using a virtual mobile device, and the average inference time is 0.35 seconds per image.
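A weighted cross-entropy with a balance parameter is one common way to trade sensitivity against specificity. The sketch below is a minimal, numerically stable version for illustration only; the paper's actual loss may differ, and `beta` is an assumed weighting parameter:

```python
import math

def balanced_bce(y_true, y_prob, beta=0.7, eps=1e-7):
    # beta > 0.5 weights positives (missed cancers) more, raising sensitivity;
    # beta < 0.5 weights negatives more, raising specificity
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1.0 - eps)  # clamp to avoid log(0)
        total += -(beta * y * math.log(p)
                   + (1.0 - beta) * (1 - y) * math.log(1.0 - p))
    return total / len(y_true)
```

Clamping the predicted probability away from 0 and 1 is the standard trick that keeps the logarithms finite.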


Automated Detection of Acute Leukemia using K-mean Clustering Algorithm

Mar 06, 2018
Sachin Kumar, Sumita Mishra, Pallavi Asthana, Pragya

Leukemia is a hematologic cancer that develops in blood tissue and triggers rapid production of immature, abnormally shaped white blood cells. Statistics show that leukemia is one of the leading causes of death in men and women alike. Microscopic examination of a blood sample or bone marrow smear is the most effective technique for diagnosing leukemia. Pathologists analyze microscopic samples to make diagnostic assessments on the basis of characteristic cell features. Recently, computerized methods for cancer detection have been explored to minimize human intervention and provide accurate clinical information. This paper presents an algorithm for an automated image-based acute leukemia detection system. The method uses basic enhancement, morphology, filtering, and segmentation techniques to extract the region of interest via the k-means clustering algorithm. The proposed algorithm achieved an accuracy of 92.8% and was tested with K-Nearest Neighbor (KNN) and Naive Bayes classifiers on a dataset of 60 samples.
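The k-means step can be illustrated on scalar pixel intensities, where clusters separate background, cytoplasm, and nuclei. This is a minimal pure-Python sketch, not the paper's implementation:

```python
import random

def kmeans_1d(values, k=3, iters=20, seed=0):
    """Cluster scalar intensities; returns the sorted cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)  # initialize from random data points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[nearest].append(v)
        # move each center to the mean of its assigned points
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)
```

On an image, the same loop would run over pixel intensities (or color vectors), and the cluster with the darkest center would be taken as the nuclei region of interest.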

* Advances in Intelligent Systems and Computing, vol 554. Springer, 2018 
* Presented in ICCCCS 2016 

Breast Cancer Classification from Histopathological Images with Inception Recurrent Residual Convolutional Neural Network

Nov 10, 2018
Md Zahangir Alom, Chris Yakopcic, Tarek M. Taha, Vijayan K. Asari

The Deep Convolutional Neural Network (DCNN) is one of the most powerful and successful deep learning approaches. DCNNs have already provided superior performance in different modalities of medical imaging including breast cancer classification, segmentation, and detection. Breast cancer is one of the most common and dangerous cancers impacting women worldwide. In this paper, we have proposed a method for breast cancer classification with the Inception Recurrent Residual Convolutional Neural Network (IRRCNN) model. The IRRCNN is a powerful DCNN model that combines the strength of the Inception Network (Inception-v4), the Residual Network (ResNet), and the Recurrent Convolutional Neural Network (RCNN). The IRRCNN shows superior performance against equivalent Inception Networks, Residual Networks, and RCNNs for object recognition tasks. In this paper, the IRRCNN approach is applied for breast cancer classification on two publicly available datasets including BreakHis and Breast Cancer Classification Challenge 2015. The experimental results are compared against the existing machine learning and deep learning-based approaches with respect to image-based, patch-based, image-level, and patient-level classification. The IRRCNN model provides superior classification performance in terms of sensitivity, Area Under the Curve (AUC), the ROC curve, and global accuracy compared to existing approaches for both datasets.
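The recurrent-residual idea combined in the IRRCNN can be sketched abstractly. Below, `layer` is a stand-in callable for a convolutional layer; this is a toy simplification of a recurrent residual unit, not the paper's exact block:

```python
def recurrent_residual_block(x, layer, steps=2):
    # layer: placeholder for a convolutional layer (any callable on x)
    h = layer(x)
    for _ in range(steps):
        h = layer(x + h)   # recurrent step: the feed-forward input is re-injected
    return x + h           # residual skip connection around the recurrent unit
```

The recurrence deepens the effective receptive field without adding parameters, while the outer skip connection keeps gradients flowing as in a ResNet.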

* 15 pages, 9 figures, 9 tables 

Shape Detection In 2D Ultrasound Images

Nov 22, 2019
Ruturaj Gole, Haixia Wu, Subho Ghose

Ultrasound images are among the most widely used techniques in clinical settings to analyze and detect different organs for study or diagnosis of diseases. The dependence on subjective opinions of experts such as radiologists calls for an automatic recognition and detection system that can provide an objective analysis. Previous work on this topic is limited and can be classified by the organ of interest. Hybrid neural networks, linear and logistic regression models, 3D reconstructed models, and various machine learning techniques have been used to solve complex problems such as detection of lesions and cancer. Our project aims to use Dual Path Networks (DPNs) to segment and detect shapes in ultrasound images taken from 3D printed models of the liver. Further, the deep DPN architecture could be coupled with a Fully Convolutional Network (FCN) to refine the results. Data denoised with various filters will be compared to determine which yields the best results. DPNs work well with small amounts of data, which suits us since our dataset will be limited in size. Moreover, the ultrasound scans will need to be taken with the scanner at different orientations relative to the organ, so that the training dataset supports accurate segmentation and shape detection.
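As a toy illustration of the denoising step mentioned above, here is a minimal 1-D median filter; the project would apply 2-D filters to images, so this is illustrative only:

```python
def median_filter_1d(signal, width=3):
    """Simple speckle/impulse denoiser; the window is clamped at the edges."""
    half = width // 2
    out = []
    for i in range(len(signal)):
        window = sorted(signal[max(0, i - half): i + half + 1])
        out.append(window[len(window) // 2])  # middle element of sorted window
    return out
```

Median filtering is a common choice for ultrasound because it suppresses speckle-like impulses while preserving edges better than a mean filter.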


3DFPN-HS$^2$: 3D Feature Pyramid Network Based High Sensitivity and Specificity Pulmonary Nodule Detection

Jun 11, 2019
Jingya Liu, Liangliang Cao, Oguz Akin, Yingli Tian

Accurate detection of pulmonary nodules with high sensitivity and specificity is essential for automatic lung cancer diagnosis from CT scans. Although many deep learning-based algorithms have made great progress in improving the accuracy of nodule detection, the high false positive rate is still a challenging problem that limits automatic diagnosis in routine clinical practice. In this paper, we propose a novel pulmonary nodule detection framework based on a 3D Feature Pyramid Network (3DFPN) that improves the sensitivity of nodule detection by employing multi-scale features to increase the resolution of nodules, as well as a parallel top-down path to transmit high-level semantic features that complement low-level general features. Furthermore, a High Sensitivity and Specificity (HS$^2$) network is introduced to eliminate falsely detected nodule candidates by tracking the appearance changes of each candidate across continuous CT slices. The proposed framework is evaluated on the public Lung Nodule Analysis (LUNA16) challenge dataset. Our method accurately detects lung nodules with high sensitivity and specificity, achieving $90.4\%$ sensitivity at 1/8 false positives per scan, which outperforms the state-of-the-art results by $15.6\%$.
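The HS$^2$ idea of tracking candidates across continuous slices can be sketched roughly as follows. This is a simplification that assumes detections are snapped to grid locations, with a hypothetical persistence threshold `min_run`:

```python
def persistent_candidates(detections_per_slice, min_run=3):
    """Keep nodule candidates whose (x, y) location appears in at least
    `min_run` consecutive CT slices; isolated hits are treated as false positives."""
    runs = {}     # location -> length of current consecutive-slice run
    keep = set()
    for slice_hits in detections_per_slice:
        # locations absent from this slice drop out, resetting their run
        runs = {loc: runs.get(loc, 0) + 1 for loc in slice_hits}
        keep.update(loc for loc, n in runs.items() if n >= min_run)
    return keep
```

The intuition is that a real nodule has volumetric extent and therefore appears in several adjacent slices, whereas imaging noise typically does not.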

* 8 pages, 3 figures. Accepted to MICCAI 2019 

A new correlation clustering method for cancer mutation analysis

Jan 25, 2016
Jack P. Hou, Amin Emad, Gregory J. Puleo, Jian Ma, Olgica Milenkovic

Cancer genomes exhibit a large number of different alterations that affect many genes in a diverse manner. It is widely believed that these alterations follow combinatorial patterns that have a strong connection with the underlying molecular interaction networks and functional pathways. A better understanding of the generative mechanisms behind the mutation rules and their influence on gene communities is of great importance for the process of driver mutation discovery and for identification of network modules related to cancer development and progression. We developed a new method for cancer mutation pattern analysis based on a constrained form of correlation clustering. Correlation clustering is an agnostic learning method that can be used for general community detection problems in which the number of communities or their structure is not known beforehand. The resulting algorithm, named $C^3$, leverages mutual exclusivity of mutations, patient coverage, and driver network concentration principles; it accepts as its input a user-determined combination of heterogeneous patient data, such as that available from TCGA (including mutation, copy number, and gene expression information), and creates a large number of clusters containing mutually exclusive mutated genes in a particular type of cancer. The cluster sizes may be required to obey some useful soft size constraints, without impacting the computational complexity of the algorithm. To test $C^3$, we performed a detailed analysis on TCGA breast cancer and glioblastoma data and showed that our algorithm outperforms the state-of-the-art CoMEt method in terms of discovering mutually exclusive gene modules and identifying driver genes. Our $C^3$ method represents a unique tool for efficient and reliable identification of mutation patterns and driver pathways in large-scale cancer genomics studies.
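Correlation clustering itself can be illustrated with the classic randomized pivot algorithm. This is a sketch of the general method, not the constrained $C^3$ solver; here a "positive" edge joins two genes whose mutations are judged mutually exclusive, and the gene names are hypothetical examples:

```python
import random

def pivot_clustering(genes, exclusive_pairs, seed=0):
    """Randomized pivot algorithm: pick a pivot, group it with all of its
    positive-edge neighbors, remove the cluster, and repeat."""
    rng = random.Random(seed)
    remaining = list(genes)
    rng.shuffle(remaining)
    clusters = []
    while remaining:
        pivot = remaining.pop(0)
        cluster = [pivot] + [g for g in remaining
                             if frozenset((pivot, g)) in exclusive_pairs]
        remaining = [g for g in remaining if g not in cluster]
        clusters.append(cluster)
    return clusters
```

The pivot scheme needs no preset number of clusters, which is why correlation clustering suits community detection when module structure is unknown in advance.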

* 22 pages, 4 figures 

General DeepLCP model for disease prediction : Case of Lung Cancer

Sep 15, 2020
Mayssa Ben Kahla, Dalel Kanzari, Ahmed Maalel

According to the Global Health Observatory (GHO), highly prevalent diseases such as ischaemic heart disease, stroke, lung cancer, and lower respiratory infections have remained the top killers during the past decade. The growth in the number of deaths caused by these diseases is due to very delayed symptom detection: in the early stages the symptoms are insignificant and similar to those of benign diseases (e.g., the flu), so the disease can only be detected at an advanced stage. In addition, the high frequency of practices harmful to health, hereditary factors, and stressful living conditions can increase death rates. Much research has dealt with these fatal diseases, and most of it applies advanced machine learning models to image diagnosis. The drawback is that imaging only permits detection at a very late stage, when the patient can hardly be saved. In this paper we present our new approach, "DeepLCP", to predict fatal diseases that threaten people's lives. It is mainly based on raw and heterogeneous data about the person under test. "DeepLCP" results from a combination of Natural Language Processing (NLP) and the deep learning paradigm. The experimental results of the proposed model in the case of lung cancer prediction show high accuracy and a low loss rate during validation of the disease prediction.


Lung Segmentation and Nodule Detection in Computed Tomography Scan using a Convolutional Neural Network Trained Adversarially using Turing Test Loss

Jun 16, 2020
Rakshith Sathish, Rachana Sathish, Ramanathan Sethuraman, Debdoot Sheet

Lung cancer is the most common form of cancer found worldwide, with a high mortality rate. Early detection of pulmonary nodules by screening with a low-dose computed tomography (CT) scan is crucial for its effective clinical management. Nodules which are symptomatic of malignancy occupy about 0.0125 - 0.025\% of the volume in a CT scan of a patient. Manual screening of all slices is a tedious task and presents a high risk of human error. To tackle this problem we propose a computationally efficient two-stage framework. In the first stage, a convolutional neural network (CNN) trained adversarially using a Turing test loss segments the lung region. In the second stage, patches sampled from the segmented region are classified to detect the presence of nodules. The proposed method is experimentally validated on the LUNA16 challenge dataset with a dice coefficient of $0.984\pm0.0007$ for 10-fold cross-validation.
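The reported dice coefficient measures the overlap between predicted and reference lung masks. For binary masks represented as sets of voxel coordinates, it reduces to:

```python
def dice_coefficient(pred_voxels, true_voxels):
    """Dice = 2|P ∩ T| / (|P| + |T|) for binary masks given as voxel sets."""
    if not pred_voxels and not true_voxels:
        return 1.0  # two empty masks agree perfectly by convention
    overlap = len(pred_voxels & true_voxels)
    return 2.0 * overlap / (len(pred_voxels) + len(true_voxels))
```

A score of 1.0 means the segmentation matches the reference exactly; 0.0 means the masks are disjoint.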

* Accepted at 42nd Annual International Conferences of the IEEE Engineering in Medicine and Biology Society (2020) 