"cancer detection": models, code, and papers

End-to-End Discriminative Deep Network for Liver Lesion Classification

Jan 28, 2019
Francisco Perdigon Romero, Andre Diler, Gabriel Bisson-Gregoire, Simon Turcotte, Real Lapointe, Franck Vandenbroucke-Menu, An Tang, Samuel Kadoury

Colorectal liver metastasis is one of the most aggressive liver malignancies. While the definition of lesion type based on CT images determines the diagnosis and therapeutic strategy, the discrimination between cancerous and non-cancerous lesions is critical and requires highly skilled expertise, experience and time. In the present work we introduce an end-to-end deep learning approach to assist in the discrimination between liver metastases from colorectal cancer and benign cysts in abdominal CT images of the liver. Our approach incorporates the efficient feature extraction of InceptionV3 combined with residual connections and pre-trained weights from ImageNet. The architecture also includes fully connected classification layers to generate a probabilistic output of lesion type. We use an in-house clinical biobank with 230 liver lesions originating from 63 patients. With an accuracy of 0.96 and an F1-score of 0.92, the results obtained with the proposed approach surpass state-of-the-art methods. Our work provides the basis for incorporating machine learning tools in specialized radiology software to assist physicians in the early detection and treatment of liver lesions.
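
As a rough illustration of the pipeline described above, the sketch below builds an ImageNet-pretrained backbone with a small fully connected head that outputs a lesion probability. It uses torchvision's plain InceptionV3 as a stand-in; the authors describe InceptionV3 combined with residual connections, so treat the layer sizes and head design as assumptions rather than their implementation.

    import torch
    import torch.nn as nn
    from torchvision import models

    # ImageNet-pretrained InceptionV3 backbone (stand-in for the paper's
    # Inception architecture with residual connections)
    net = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
    net.fc = nn.Sequential(              # replace the 1000-class ImageNet head
        nn.Linear(2048, 256),
        nn.ReLU(),
        nn.Dropout(0.5),
        nn.Linear(256, 1),               # single logit: metastasis vs. benign cyst
    )

    net.eval()
    with torch.no_grad():
        x = torch.randn(1, 3, 299, 299)      # InceptionV3 expects 299x299 input
        prob = torch.sigmoid(net(x))         # probabilistic output of lesion type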

  

A Clinically Inspired Approach for Melanoma classification

Jun 15, 2021
Prathyusha Akundi, Soumyasis Gun, Jayanthi Sivaswamy

Melanoma is a leading cause of skin cancer deaths and hence early and effective diagnosis of melanoma is of interest. Current approaches for automated diagnosis of melanoma use either pattern recognition or analytical criteria such as ABCDE (asymmetry, border, color, diameter and evolving). In practice, however, dermatologists take a differential approach wherein outliers (ugly ducklings) are detected and used to evaluate nevi/lesions. Incorporation of differential recognition in Computer Aided Diagnosis (CAD) systems has not been explored but can be beneficial, as it can provide a clinical justification for the derived decision. We present a method for identifying and quantifying ugly ducklings by performing Intra-Patient Comparative Analysis (IPCA) of neighboring nevi. This is then incorporated in a CAD system design for melanoma detection. This design ensures flexibility to handle cases where IPCA is not possible. Our experiments on a public dataset show that the outlier information helps boost the sensitivity of detection by at least 4.1% and specificity by 4.0% to 8.9%, depending on the use of a strong (EfficientNet) or moderately strong (VGG or ResNet) classifier.

* 5 pages, 3 figures, 1 table 
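
A minimal sketch of how IPCA-style ugly-duckling scoring could be quantified, assuming each nevus has already been embedded by a CNN; the scoring rule (distance from the patient's mean embedding, z-normalised within the patient) is an illustration, not the authors' definition.

    import numpy as np

    def ugly_duckling_scores(embeddings: np.ndarray) -> np.ndarray:
        """embeddings: (n_nevi, d) CNN features for one patient's nevi."""
        centroid = embeddings.mean(axis=0)
        dists = np.linalg.norm(embeddings - centroid, axis=1)
        # z-normalise within the patient so scores are comparable across patients
        return (dists - dists.mean()) / (dists.std() + 1e-8)

    patient_nevi = np.random.rand(12, 512)        # e.g. 12 nevi, 512-d features
    scores = ugly_duckling_scores(patient_nevi)
    outliers = np.where(scores > 2.0)[0]          # candidate ugly ducklings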
  

Analysis of skin lesion images with deep learning

Jan 11, 2021
Josef Steppan, Sten Hanke

Skin cancer is the most common cancer worldwide, with melanoma being the deadliest form. Dermoscopy is a skin imaging modality that has shown an improvement in the diagnosis of skin cancer compared to unaided visual examination. We evaluate the current state of the art in the classification of dermoscopic images based on the ISIC-2019 Challenge for the classification of skin lesions and on the current literature. Various deep neural network architectures pre-trained on the ImageNet data set are adapted to a combined training data set of publicly available dermoscopic and clinical images of skin lesions using transfer learning and model fine-tuning. The performance and applicability of these models for the detection of eight classes of skin lesions are examined. Real-time data augmentation, which applies random rotation, translation, shear, and zoom within specified bounds, is used to increase the number of available training samples. Model predictions are multiplied by inverse class frequencies and normalized to better approximate the actual probability distributions. Overall prediction accuracy is further increased by using the arithmetic mean of the predictions of several independently trained models. The best single model has been published as a web service.

* for source code see: http://github.com/j05t/lesion-analysis 
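
Two of the steps described above, inverse-class-frequency rescaling of predictions and ensemble averaging, can be sketched as follows. This is a reading of the abstract rather than the published code linked above; names and shapes are illustrative.

    import numpy as np

    def rebalance(probs: np.ndarray, class_counts: np.ndarray) -> np.ndarray:
        """probs: (n_samples, 8) softmax outputs; class_counts: training-set counts."""
        weighted = probs / class_counts                        # multiply by inverse class frequency
        return weighted / weighted.sum(axis=1, keepdims=True)  # renormalise to probabilities

    def ensemble(prob_list):
        """Arithmetic mean of predictions from independently trained models."""
        return np.mean(np.stack(prob_list, axis=0), axis=0)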
  

Deep Multi-Modal Classification of Intraductal Papillary Mucinous Neoplasms (IPMN) with Canonical Correlation Analysis

Apr 27, 2018
Sarfaraz Hussein, Pujan Kandel, Juan E. Corral, Candice W. Bolan, Michael B. Wallace, Ulas Bagci

Pancreatic cancer has the poorest prognosis among all cancer types. Intraductal Papillary Mucinous Neoplasms (IPMNs) are radiographically identifiable precursors to pancreatic cancer; hence, early detection and precise risk assessment of IPMN are vital. In this work, we propose a Convolutional Neural Network (CNN) based computer aided diagnosis (CAD) system to perform IPMN diagnosis and risk assessment using multi-modal MRI. In our proposed approach, we use minimum and maximum intensity projections to ease the annotation variations among different slices and types of MRI. Then, we present a CNN to obtain a deep feature representation corresponding to each MRI modality (T1-weighted and T2-weighted). In the final step, we employ canonical correlation analysis (CCA) to perform fusion at the feature level, leading to discriminative canonical correlation features. The extracted features are used for classification. Our results indicate significant improvements over other potential approaches to this important problem. The proposed approach does not require explicit sample balancing in cases of imbalance between positive and negative examples. To the best of our knowledge, our study is the first to automatically diagnose IPMN using multi-modal MRI.

* Accepted for publication in IEEE International Symposium on Biomedical Imaging (ISBI) 2018 
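
The feature-level fusion step can be sketched with scikit-learn's CCA; the feature dimensions, classifier, and data below are placeholders and not the authors' setup.

    import numpy as np
    from sklearn.cross_decomposition import CCA
    from sklearn.svm import SVC

    t1_feats = np.random.rand(100, 256)     # deep features from the T1-weighted CNN
    t2_feats = np.random.rand(100, 256)     # deep features from the T2-weighted CNN
    labels = np.random.randint(0, 2, 100)   # placeholder IPMN risk labels

    cca = CCA(n_components=32)
    t1_c, t2_c = cca.fit_transform(t1_feats, t2_feats)   # canonical correlation features
    fused = np.concatenate([t1_c, t2_c], axis=1)         # feature-level fusion

    clf = SVC(kernel="linear").fit(fused, labels)        # classification on fused features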
  

Mass Segmentation in Automated 3-D Breast Ultrasound Using Dual-Path U-net

Sep 29, 2021
Hamed Fayyaz, Ehsan Kozegar, Tao Tan, Mohsen Soryani

Automated 3-D breast ultrasound (ABUS) is a newly introduced system for breast screening that has been proposed as a supplementary modality to mammography for breast cancer detection. While ABUS performs better in dense breasts, reading ABUS images is exhausting and time-consuming, so a computer-aided detection system is necessary for the interpretation of these images. Mass segmentation plays a vital role in computer-aided detection systems and affects the overall performance. Mass segmentation is a challenging task because of the large variety in size, shape, and texture of masses. Moreover, an imbalanced dataset makes segmentation harder. A novel mass segmentation approach based on deep learning is introduced in this paper. The deep network used in this study for image segmentation is inspired by U-net, which has been used widely for dense segmentation in recent years. The system's performance was determined using a dataset of 50 masses comprising 38 malignant and 12 benign lesions. The proposed segmentation method attained a mean Dice of 0.82, which outperformed a two-stage supervised edge-based method with a mean Dice of 0.74 and an adaptive region growing method with a mean Dice of 0.65.
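
For reference, the Dice coefficient quoted above (0.82 vs. 0.74 and 0.65) is computed on binary masks as in the short sketch below.

    import numpy as np

    def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
        """pred, target: binary segmentation masks of the same shape."""
        pred, target = pred.astype(bool), target.astype(bool)
        intersection = np.logical_and(pred, target).sum()
        return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)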

  

Improving Prognostic Value of CT Deep Radiomic Features in Pancreatic Ductal Adenocarcinoma Using Transfer Learning

May 23, 2019
Yucheng Zhang, Edrise M. Lobo-Mueller, Paul Karanicolas, Steven Gallinger, Masoom A. Haider, Farzad Khalvati

Pancreatic ductal adenocarcinoma (PDAC) is one of the most aggressive cancers, with an extremely poor prognosis. Radiomics has shown prognostic ability in multiple types of cancer, including PDAC. However, the prognostic value of traditional radiomics pipelines, which are based on hand-crafted radiomic features alone, is limited due to multicollinearity of features, the multiple-testing problem, and the limited performance of conventional machine learning classifiers. Deep learning architectures, such as convolutional neural networks (CNNs), have been shown to outperform traditional techniques in computer vision tasks such as object detection. However, they require large sample sizes for training, which limits their development. As an alternative solution, CNN-based transfer learning has shown the potential to achieve reasonable performance on datasets with small sample sizes. In this work, we developed a CNN-based transfer learning approach for prognostication of overall survival in PDAC patients. The results showed that the transfer learning approach outperformed the traditional radiomics model on PDAC data. A transfer learning approach may fill the gap between radiomics and deep learning analytics for cancer prognosis and improve performance beyond what CNNs can achieve using small datasets.
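
One generic way to realise CNN-based transfer learning on a small dataset is to use a pretrained network as a fixed feature extractor and fit a simple model on top. The sketch below is an assumption of that general pattern (ResNet-18 features plus logistic regression on a median split of survival), not the authors' prognostic model.

    import torch
    import torch.nn as nn
    from torchvision import models
    from sklearn.linear_model import LogisticRegression

    resnet = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    extractor = nn.Sequential(*list(resnet.children())[:-1])   # drop the final FC layer
    extractor.eval()

    with torch.no_grad():
        patches = torch.randn(30, 3, 224, 224)          # placeholder CT tumour patches
        feats = extractor(patches).flatten(1).numpy()   # (30, 512) deep radiomic features

    labels = (torch.rand(30) > 0.5).int().numpy()       # placeholder: above/below median survival
    clf = LogisticRegression(max_iter=1000).fit(feats, labels)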

  

Peer Learning for Skin Lesion Classification

Mar 08, 2021
Tariq Bdair, Nassir Navab, Shadi Albarqouni

Skin cancer is one of the deadliest cancers worldwide, yet its mortality can be reduced by early detection. Recent deep-learning methods have shown dermatologist-level performance in skin cancer classification. However, this success demands a large amount of centralized data, which is often not available. Federated learning has recently been introduced to train machine learning models in a privacy-preserving distributed fashion, but it demands annotated data at the clients, which is usually expensive and scarce, especially in the medical field. To this end, we propose FedPerl, a semi-supervised federated learning method that utilizes peer learning from social sciences and ensemble averaging from committee machines to build communities and encourage their members to learn from each other so that they produce more accurate pseudo labels. We also propose the peer anonymization (PA) technique as a core component of FedPerl. PA preserves privacy and reduces the communication cost while maintaining performance without additional complexity. We validated our method on 38,000 skin lesion images collected from 4 publicly available datasets. FedPerl outperforms the baselines and state-of-the-art SSFL by 15.8% and 1.8%, respectively. Further, FedPerl shows less sensitivity to noisy clients.
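
The pseudo-labelling idea, averaging peers' predictions on unlabeled data and keeping only the confident ones, can be sketched as below; the threshold and shapes are illustrative, and this omits the federated training and peer anonymization machinery.

    import numpy as np

    def peer_pseudo_labels(peer_probs, threshold=0.9):
        """peer_probs: list of (n_samples, n_classes) softmax outputs from peer models."""
        mean_probs = np.mean(np.stack(peer_probs, axis=0), axis=0)  # ensemble averaging
        confidence = mean_probs.max(axis=1)
        labels = mean_probs.argmax(axis=1)
        keep = confidence >= threshold         # use only confident pseudo labels
        return labels[keep], keep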

  

Learning Permutation Invariant Representations using Memory Networks

Nov 18, 2019
Shivam Kalra, Mohammed Adnan, Graham Taylor, Hamid Tizhoosh

Many real-world tasks such as 3D object detection and high-resolution image classification involve learning from a set of instances. In these cases, only a group of instances, a set, collectively contains meaningful information; therefore only the sets have labels, not individual data instances. In this work, we present a permutation invariant neural network called a Memory-based Exchangeable Model (MEM) for learning set functions. The model consists of memory units that embed an input sequence into high-level features (memories), enabling the model to learn inter-dependencies among instances of the set in the form of attention vectors. To demonstrate its learning ability, we evaluated our model on test datasets created using MNIST, on point cloud classification, and on population estimation. We also tested the model on classifying histopathology whole slide images to discriminate between two subtypes of lung cancer: lung adenocarcinoma and lung squamous cell carcinoma. We systematically extracted patches from lung cancer images in The Cancer Genome Atlas (TCGA) dataset, the largest public repository of histopathology images. The proposed method achieved a competitive classification accuracy of 84.84%. The results on other datasets are promising and demonstrate the efficacy of our model.
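
As a rough illustration of permutation-invariant set classification (not the MEM architecture itself), the sketch below pools patch embeddings with learned attention weights so the prediction is independent of instance order; dimensions are placeholders.

    import torch
    import torch.nn as nn

    class AttentionSetPool(nn.Module):
        def __init__(self, in_dim=512, hidden=128, n_classes=2):
            super().__init__()
            self.attn = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                                      nn.Linear(hidden, 1))
            self.head = nn.Linear(in_dim, n_classes)

        def forward(self, x):                        # x: (set_size, in_dim) instance features
            w = torch.softmax(self.attn(x), dim=0)   # attention over set members
            pooled = (w * x).sum(dim=0)              # order-invariant pooling
            return self.head(pooled)

    model = AttentionSetPool()
    logits = model(torch.randn(50, 512))   # e.g. 50 WSI patches -> two-subtype logits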

  

Optimize transfer learning for lung diseases in bronchoscopy using a new concept: sequential fine-tuning

Feb 10, 2018
Tao Tan, Zhang Li, Haixia Liu, Ping Liu, Wenfang Tang, Hui Li, Yue Sun, Yusheng Yan, Keyu Li, Tao Xu, Shanshan Wan, Ke Lou, Jun Xu, Huiming Ying, Quchang Ouyang, Yuling Tang, Zheyu Hu, Qiang Li

Bronchoscopy inspection, as a follow-up procedure to radiological imaging, plays a key role in lung disease diagnosis and in determining treatment plans for patients. Doctors need to decide in a timely manner whether to biopsy a patient when performing bronchoscopy. However, doctors also need to be very selective with biopsies, as biopsies may cause uncontrollable, life-threatening bleeding of the lung tissue. To help doctors be more selective with biopsies and to provide a second opinion on diagnosis, in this work we propose a computer-aided diagnosis (CAD) system for lung diseases including cancers and tuberculosis (TB). The system is developed based on transfer learning, and we propose a novel transfer learning method: sequential fine-tuning. Compared to traditional fine-tuning methods, our method achieves the best performance. We obtained an overall accuracy of 77.0% on a dataset of 81 normal cases, 76 tuberculosis cases and 277 lung cancer cases, while the other traditional transfer learning methods achieve accuracies of 73% and 68%. The detection accuracies of our method for cancers, TB and normal cases are 87%, 54% and 91%, respectively. This indicates that the CAD system has the potential to improve lung disease diagnosis accuracy in bronchoscopy and might help doctors be more selective with biopsies.
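
One plausible reading of sequential fine-tuning is to train the new classifier head first and then unfreeze deeper backbone blocks stage by stage; the sketch below illustrates that pattern with a torchvision ResNet-18 and is an assumption, not the authors' procedure.

    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, 3)    # normal / TB / lung cancer

    for p in model.parameters():                     # start with the whole backbone frozen
        p.requires_grad = False

    stages = [model.fc, model.layer4, model.layer3]  # illustrative unfreezing order
    for block in stages:
        for p in block.parameters():
            p.requires_grad = True
        # ... train for a few epochs here before unfreezing the next block ...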

  