
"cancer detection": models, code, and papers

Accurate Pulmonary Nodule Detection in Computed Tomography Images Using Deep Convolutional Neural Networks

Aug 29, 2017
Jia Ding, Aoxue Li, Zhiqiang Hu, Liwei Wang

Early detection of pulmonary cancer is the most promising way to enhance a patient's chance of survival. Accurate pulmonary nodule detection in computed tomography (CT) images is a crucial step in diagnosing pulmonary cancer. In this paper, inspired by the successful use of deep convolutional neural networks (DCNNs) in natural image recognition, we propose a novel pulmonary nodule detection approach based on DCNNs. We first introduce a deconvolutional structure into the Faster Region-based Convolutional Neural Network (Faster R-CNN) for candidate detection on axial slices. Then, a three-dimensional DCNN is presented for the subsequent false positive reduction. Experimental results on the LUng Nodule Analysis 2016 (LUNA16) Challenge demonstrate the superior detection performance of the proposed approach (average FROC score of 0.891, ranking first among all submitted results).

* MICCAI 2017 accepted 
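A minimal sketch of the kind of 3-D CNN false-positive-reduction stage described above, assuming PyTorch; the patch size, layer widths, and class count are illustrative placeholders rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class FalsePositiveReducer3D(nn.Module):
    """Classifies a candidate CT patch as nodule vs. false positive."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                              # 32 -> 16
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                              # 16 -> 8
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(64, 2)

    def forward(self, x):                                 # x: (B, 1, 32, 32, 32)
        return self.classifier(self.features(x).flatten(1))

model = FalsePositiveReducer3D()
logits = model(torch.randn(4, 1, 32, 32, 32))             # 4 candidate patches
print(logits.shape)                                        # torch.Size([4, 2])
```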
  

COTR: Convolution in Transformer Network for End to End Polyp Detection

May 23, 2021
Zhiqiang Shen, Chaonan Lin, Shaohua Zheng

Purpose: Colorectal cancer (CRC) is the second most common cause of cancer mortality worldwide. Colonoscopy is a widely used technique for colon screening and polyp lesion diagnosis. Nevertheless, manual screening using colonoscopy suffers from a substantial miss rate of polyps and is an overwhelming burden for endoscopists. Computer-aided diagnosis (CAD) for polyp detection has the potential to reduce human error and human burden. However, current polyp detection methods based on object detection frameworks need many handcrafted pre-processing and post-processing operations or user guidance that require domain-specific knowledge. Methods: In this paper, we propose a convolution-in-transformer (COTR) network for end-to-end polyp detection. Motivated by the detection transformer (DETR), COTR is constituted by a CNN for feature extraction, transformer encoder layers interleaved with convolutional layers for feature encoding and recalibration, transformer decoder layers for object querying, and a feed-forward network for detection prediction. Considering the slow convergence of DETR, COTR embeds convolution layers into the transformer encoder for feature reconstruction and convergence acceleration. Results: Experimental results on two public polyp datasets show that COTR achieved 91.49% precision, 82.69% sensitivity, and 86.87% F1-score on ETIS-LARIB, and 91.67% precision, 93.54% sensitivity, and 92.60% F1-score on CVC-ColonDB. Conclusion: This study proposed an end-to-end detection method based on the detection transformer for colorectal polyp detection. Experimental results on the ETIS-LARIB and CVC-ColonDB datasets demonstrate that the proposed model achieves performance comparable to state-of-the-art methods.
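As an illustration only (not the released COTR code), the sketch below shows one way to interleave a convolutional layer with transformer encoder layers, in the spirit of the convolution-in-transformer idea; the dimensions and layer counts are assumptions.

```python
import torch
import torch.nn as nn

class ConvInTransformerEncoder(nn.Module):
    """Two transformer encoder layers with a convolution folded in between."""
    def __init__(self, d_model=256, nhead=8, fmap_hw=(16, 16)):
        super().__init__()
        self.fmap_hw = fmap_hw
        self.attn1 = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.conv = nn.Conv2d(d_model, d_model, kernel_size=3, padding=1)
        self.attn2 = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)

    def forward(self, tokens):                 # tokens: (B, H*W, d_model)
        b, n, c = tokens.shape
        h, w = self.fmap_hw
        x = self.attn1(tokens)
        # Fold the token sequence back into a feature map for the convolution.
        fmap = self.conv(x.transpose(1, 2).reshape(b, c, h, w))
        return self.attn2(fmap.flatten(2).transpose(1, 2))

enc = ConvInTransformerEncoder()
print(enc(torch.randn(2, 16 * 16, 256)).shape)  # torch.Size([2, 256, 256])
```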

  

A Comprehensive Evaluation of Machine Learning Techniques for Cancer Class Prediction Based on Microarray Data

Jul 26, 2013
Khalid Raza, Atif N Hasan

Prostate cancer is among the most common cancers in males and its heterogeneity is well known. Early detection helps in making therapeutic decisions. There is not yet a standard, foolproof technique or procedure for predicting the cancer class. Genomic-level changes can be detected in gene expression data, and those changes may serve as a standard model for class prediction on arbitrary cancer data. Various techniques, including machine learning, have been applied to prostate cancer data sets to accurately predict the cancer class. The huge number of attributes and small number of samples in microarray data lead to poor machine learning performance, so the most challenging part is attribute reduction, i.e. removal of non-significant genes. In this work we have compared several machine learning techniques for their accuracy in predicting the cancer class. Machine learning is effective when the number of samples is larger than the number of attributes (genes), which is rarely the case with gene expression data. Attribute reduction or gene filtering is therefore essential to make the data more meaningful, as most genes do not participate in tumor development and are irrelevant for cancer prediction. Here we have applied a combination of statistical techniques, namely the inter-quartile range and the t-test, which has been effective in filtering significant genes and minimizing noise in the data. Further, we have performed a comprehensive evaluation of ten state-of-the-art machine learning techniques for their accuracy in class prediction of prostate cancer. Of these, the Bayes Network performed best with an accuracy of 94.11%, followed by Naive Bayes with 91.17%. To cross-validate our results, we modified our training dataset in six different ways and found that the average sensitivity, specificity, precision, and accuracy of the Bayes Network were the highest among all techniques used.

* 8 pages, 3 figures and 7 tables 
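A small sketch of the gene-filtering step the abstract describes (an inter-quartile range filter followed by a two-sample t-test), written with NumPy/SciPy; the thresholds and data below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy import stats

def filter_genes(X, y, iqr_quantile=0.5, p_threshold=0.01):
    """X: (samples, genes) expression matrix; y: binary class labels."""
    # 1) Inter-quartile range filter: drop genes with low variability.
    iqr = stats.iqr(X, axis=0)
    keep = iqr > np.quantile(iqr, iqr_quantile)
    # 2) Two-sample t-test per gene between the two classes.
    _, p = stats.ttest_ind(X[y == 1], X[y == 0], axis=0, equal_var=False)
    keep &= p < p_threshold
    return np.flatnonzero(keep)

rng = np.random.default_rng(0)
X = rng.normal(size=(34, 5000))        # e.g. 34 samples, 5000 genes
y = rng.integers(0, 2, size=34)
print(len(filter_genes(X, y)), "genes retained")
```

The genes retained this way can then be passed to any of the evaluated classifiers, such as a Bayesian network or Naive Bayes.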
  

Domain generalization in deep learning-based mass detection in mammography: A large-scale multi-center study

Jan 27, 2022
Lidia Garrucho, Kaisar Kushibar, Socayna Jouide, Oliver Diaz, Laura Igual, Karim Lekadir

Computer-aided detection systems based on deep learning have shown great potential in breast cancer detection. However, the lack of domain generalization of artificial neural networks is an important obstacle to their deployment in changing clinical environments. In this work, we explore the domain generalization of deep learning methods for mass detection in digital mammography and analyze in depth the sources of domain shift in a large-scale multi-center setting. To this end, we compare the performance of eight state-of-the-art detection methods, including Transformer-based models, trained in a single domain and tested in five unseen domains. Moreover, a single-source mass detection training pipeline is designed to improve domain generalization without requiring images from the new domain. The results show that our workflow generalizes better than state-of-the-art transfer learning-based approaches in four out of five domains while reducing the domain shift caused by different acquisition protocols and scanner manufacturers. Subsequently, an extensive analysis is performed to identify the covariate shifts with the largest effects on detection performance, such as those due to differences in patient age, breast density, mass size, and mass malignancy. Ultimately, this comprehensive study provides key insights and best practices for future research on domain generalization in deep learning-based breast cancer detection.

  

Automatic Generation of Interpretable Lung Cancer Scoring Models from Chest X-Ray Images

Dec 17, 2020
Michael J. Horry, Subrata Chakraborty, Biswajeet Pradhan, Manoranjan Paul, Douglas P. S. Gomes, Anwaar Ul-Haq

Lung cancer is the leading cause of cancer death worldwide, with early detection being the key to a positive patient prognosis. Although a multitude of studies have demonstrated that machine learning, and particularly deep learning, techniques are effective at automatically diagnosing lung cancer, these techniques have yet to be clinically approved and adopted by the medical community. Most research in this field focuses on the narrow task of nodule detection to provide an artificial radiological second reading. We instead focus on extracting, from chest X-ray images, a wider range of pathologies associated with lung cancer using a computer vision model trained on a large dataset. We then find the set of best-fit decision trees against an independent, smaller dataset for which lung cancer malignancy metadata is provided. For this small inferencing dataset, our best model achieves a sensitivity of 85% and a specificity of 75%, with a positive predictive value of 85%, which is comparable to the performance of human radiologists. Furthermore, the decision trees created by this method may be considered a starting point for refinement by medical experts into clinically usable multi-variate lung cancer scoring and diagnostic models.

* 10 pages, 14 figures, 6 tables 
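A hedged sketch of the second stage described above: fitting an interpretable decision tree on pathology scores produced by a chest X-ray model. The feature names and labels here are placeholders, not the authors' data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

pathologies = ["mass", "nodule", "effusion", "atelectasis", "infiltration"]
rng = np.random.default_rng(0)
scores = rng.random((200, len(pathologies)))     # per-image pathology scores
malignant = (scores[:, 0] + scores[:, 1] > 1.0).astype(int)   # toy labels

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(scores, malignant)
print(export_text(tree, feature_names=pathologies))   # human-readable rules
```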
  

Semi-supervised multi-task learning for lung cancer diagnosis

May 04, 2018
Naji Khosravan, Ulas Bagci

Early detection of lung nodules is of great importance in lung cancer screening. Existing research recognizes the critical role played by CAD systems in the early detection and diagnosis of lung nodules. However, many CAD systems used as cancer detection tools produce a large number of false positives (FP) and require a further FP reduction step. Furthermore, guidelines for the early diagnosis and treatment of lung cancer consist of different shape and volume measurements of abnormalities. Segmentation is at the heart of our understanding of nodule morphology, making it a major area of interest within the field of computer-aided diagnosis systems. This study set out to test the hypothesis that joint learning of false positive (FP) nodule reduction and nodule segmentation can improve a computer-aided diagnosis (CAD) system's performance on both tasks. To support this hypothesis, we propose a 3D deep multi-task CNN that tackles these two problems jointly. We tested our system on the LUNA16 dataset and achieved an average Dice similarity coefficient (DSC) of 91% as segmentation accuracy and a score of nearly 92% for FP reduction. As a proof of our hypothesis, we showed improvements on both the segmentation and FP reduction tasks over two baselines. Our results support that joint training of these two tasks through a multi-task learning approach improves system performance on both. We also showed that a semi-supervised approach can be used to overcome the lack of labeled data for the 3D segmentation task.

* Accepted for publication at IEEE EMBC (40th International Engineering in Medicine and Biology Conference) 
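A minimal sketch of a joint objective in the spirit of the multi-task setup above: a soft Dice segmentation loss combined with a cross-entropy false-positive-reduction loss. The weighting and tensor shapes are assumptions, not the authors' values.

```python
import torch
import torch.nn.functional as F

def dice_loss(seg_logits, target, eps=1e-6):
    """Soft Dice loss on sigmoid probabilities."""
    p = torch.sigmoid(seg_logits).flatten(1)
    t = target.flatten(1)
    inter = (p * t).sum(dim=1)
    return 1 - ((2 * inter + eps) / (p.sum(dim=1) + t.sum(dim=1) + eps)).mean()

def multi_task_loss(seg_logits, seg_target, cls_logits, cls_target, w=0.5):
    seg = dice_loss(seg_logits, seg_target)
    cls = F.cross_entropy(cls_logits, cls_target)   # nodule vs. false positive
    return w * seg + (1 - w) * cls

loss = multi_task_loss(
    torch.randn(4, 1, 32, 32, 32),
    torch.randint(0, 2, (4, 1, 32, 32, 32)).float(),
    torch.randn(4, 2),
    torch.randint(0, 2, (4,)),
)
print(loss)
```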
  

Two-Stage Convolutional Neural Network Architecture for Lung Nodule Detection

May 09, 2019
Haichao Cao, Hong Liu, Enmin Song, Guangzhi Ma, Xiangyang Xu, Renchao Jin, Tengying Liu, Chih-Cheng Hung

Early detection of lung cancer is an effective way to improve the survival rate of patients. Accurate detection of lung nodules in computed tomography (CT) images is a critical step in diagnosing lung cancer. However, due to the heterogeneity of lung nodules and the complexity of their surroundings, robust nodule detection remains a challenging task. In this study, we propose a two-stage convolutional neural network (TSCNN) architecture for lung nodule detection. The CNN architecture in the first stage is based on an improved U-Net segmentation network to establish an initial detection of lung nodules. To obtain a high recall rate without introducing excessive false positive nodules, we propose a novel sampling strategy and use an offline hard-mining idea for training and prediction according to the proposed cascaded prediction method. The CNN architecture in the second stage is based on the proposed dual pooling structure, which is built into three 3D CNN classification networks for false positive reduction. Since network training requires a significant amount of data, we adopt a data augmentation method based on random masking. Furthermore, we improve the generalization ability of the false positive reduction model by means of ensemble learning. The proposed method was experimentally verified on the LUNA dataset, and the results show that the TSCNN architecture obtains competitive detection performance.

* 29 pages, 10 figures 
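The sketch below illustrates one plausible form of the random-mask augmentation mentioned above: zeroing out a randomly placed cube inside a 3-D CT patch. The mask-size range is an assumption; the paper's exact scheme may differ.

```python
import numpy as np

def random_mask(patch, max_frac=0.25, rng=None):
    """Zero out a random cube covering up to max_frac of each side length."""
    rng = rng or np.random.default_rng()
    out = patch.copy()
    dims = patch.shape
    sizes = [int(rng.integers(1, max(2, int(s * max_frac)))) for s in dims]
    starts = [int(rng.integers(0, s - m + 1)) for s, m in zip(dims, sizes)]
    z, y, x = starts
    d, h, w = sizes
    out[z:z + d, y:y + h, x:x + w] = 0
    return out

patch = np.random.rand(32, 32, 32)
masked = random_mask(patch, rng=np.random.default_rng(0))
print(patch.shape, int((masked == 0).sum()), "voxels masked")
```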
  

Detection and classification of masses in mammographic images in a multi-kernel approach

Dec 20, 2017
Sidney Marlon Lopes de Lima, Abel Guilhermino da Silva Filho, Wellington Pinheiro dos Santos

According to the World Health Organization, breast cancer is the main cause of cancer death among adult women worldwide. Although breast cancer occurs indiscriminately in countries at all levels of social and economic development, mortality rates remain high in developing and underdeveloped countries due to the low availability of early detection technologies. From the clinical point of view, mammography is still the most effective diagnostic technology, given the wide diffusion of the use and interpretation of these images. In this work we propose a method to detect and classify mammographic lesions using regions of interest of the images. Our proposal consists of decomposing each image using multi-resolution wavelets and extracting Zernike moments from each wavelet component. This approach combines texture and shape features, which can be applied to both the detection and the classification of mammary lesions. We used 355 images of fatty breast tissue from the IRMA database, with 233 normal instances (no lesion), 72 benign, and 83 malignant cases. Classification was performed using SVM and ELM networks with modified kernels, optimized for accuracy and reaching 94.11%. To account for both accuracy and training time, we defined the ratio between average percentage accuracy and average training time; our proposal achieved a ratio 50 times higher than that of the best state-of-the-art method. Since the proposed model combines a high accuracy rate with a low learning time, it can save hours of training whenever new data are received, relative to the best state-of-the-art method.

* Computer Methods and Programs in Biomedicine, 134 (2016), 11-29 
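A rough sketch of the feature pipeline outlined above: a one-level 2-D wavelet decomposition of the region of interest, with Zernike moments computed on each sub-band (here via the third-party pywt and mahotas packages). The wavelet family, radius, and degree are assumptions, not the authors' settings.

```python
import numpy as np
import pywt
import mahotas

def wavelet_zernike_features(roi, wavelet="db4", radius=32, degree=8):
    """roi: 2-D grayscale region of interest from the mammogram."""
    cA, (cH, cV, cD) = pywt.dwt2(roi, wavelet)   # one-level decomposition
    feats = []
    for band in (cA, cH, cV, cD):
        # Zernike moments summarize the shape content of each sub-band.
        feats.append(mahotas.features.zernike_moments(band, radius, degree=degree))
    return np.concatenate(feats)

roi = np.random.rand(128, 128)
features = wavelet_zernike_features(roi)
print(features.shape)   # one descriptor per sub-band, concatenated
```

The resulting feature vector would then be passed to the SVM or ELM classifier.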
  

Deep Learning-Based Automatic Detection of Poorly Positioned Mammograms to Minimize Patient Return Visits for Repeat Imaging: A Real-World Application

Sep 28, 2020
Vikash Gupta, Clayton Taylor, Sarah Bonnet, Luciano M. Prevedello, Jeffrey Hawley, Richard D White, Mona G Flores, Barbaros Selnur Erdal

Screening mammograms are a routine imaging exam performed to detect breast cancer in its early stages and reduce the morbidity and mortality attributed to this disease. To maximize the efficacy of breast cancer screening programs, proper mammographic positioning is paramount: it ensures adequate visualization of breast tissue and is necessary for effective breast cancer detection. Breast-imaging radiologists must therefore assess each mammogram for adequacy of positioning before providing a final interpretation of the examination; this often necessitates return patient visits for additional imaging. In this paper, we propose a deep learning-based method that mimics and automates this decision-making process to identify poorly positioned mammograms. Our objective is to assist mammography technologists in recognizing inadequately positioned mammograms in real time, improve the quality of mammographic positioning and performance, and ultimately reduce repeat visits for patients with initially inadequate imaging. The proposed model showed a true positive rate for detecting correct positioning of 91.35% in the mediolateral oblique view and 95.11% in the craniocaudal view. We also present an automatically generated report that can aid the mammography technologist in taking corrective measures during the patient visit.

* 12 pages, 13 figures, pre-print 
  

Deep Learning Based Analysis of Prostate Cancer from MP-MRI

Jun 02, 2021
Pedro C. Neto

The diagnosis of prostate cancer faces a problem of overdiagnosis, which leads to damaging side effects from unnecessary treatment. Research has shown that using multi-parametric magnetic resonance images to guide biopsies can drastically mitigate overdiagnosis, thus reducing the side effects on healthy patients. This study investigates deep learning techniques for computer-aided diagnosis with MRI as input. Several diagnosis problems, ranging from classifying lesions as clinically significant or not to detecting and segmenting lesions, are addressed with deep learning-based approaches. This thesis tackled two main problems regarding the diagnosis of prostate cancer. First, XmasNet was used to conduct two large experiments on lesion classification. Second, detection and segmentation experiments were conducted, first on the prostate and afterward on prostate cancer lesions. The former experiments explored the lesions in a two-dimensional space, while the latter explored models that work with three-dimensional inputs; the 3D models explored were the 3D U-Net and a pretrained 3D ResNet-18. A rigorous analysis of these problems was conducted, with two networks, two cropping techniques, two resampling techniques, two crop sizes, five input sizes, and data augmentations tested for lesion classification, and two models, two input sizes, and data augmentations tested for segmentation. While the binary classification of the clinical significance of lesions and the detection and segmentation of the prostate already achieve the desired results (0.870 AUC and 0.915 Dice score, respectively), the classification of the PI-RADS score and the segmentation of lesions still have a large margin for improvement (0.664 accuracy and 0.690 Dice score, respectively).
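For reference, a minimal NumPy sketch of the Dice similarity coefficient used in the segmentation results quoted above; it is illustrative only, not code from the thesis.

```python
import numpy as np

def dice_score(pred, target, eps=1e-6):
    """Dice similarity coefficient between two binary masks of equal shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((64, 64), dtype=int); a[16:48, 16:48] = 1
b = np.zeros((64, 64), dtype=int); b[20:52, 20:52] = 1
print(round(dice_score(a, b), 3))   # overlap of two shifted square masks
```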

  