
"cancer detection": models, code, and papers

Multi-stream Faster RCNN for Mitosis Counting in Breast Cancer Images

Feb 01, 2020
Robin Elizabeth Yancey

Mitotic count is a commonly used method to assess the level of progression of breast cancer, which is now the fourth most prevalent cancer. Unfortunately, counting mitoses is a tedious and subjective task with poor reproducibility, especially for non-experts. Since machines can read and compare more data with greater efficiency, automated counting could become the next standard technique for assessing mitosis. Furthermore, technological advancements in medicine have increased the amount of image data available for training. In this work, we propose a network constructed using an approach similar to one used for image fraud detection, with the segmented image map as the second stream input to Faster RCNN. This region-based detection model combines a fully convolutional Region Proposal Network to generate proposals with a classification network that classifies each proposal as containing mitosis or not. Features from both streams are fused in a bilinear pooling layer to maintain their spatial concurrence. After training this model on the ICPR 2014 MITOSIS contest dataset, we obtained an F-measure of 0.507, higher than both the contest winner's score and scores from recent tests on the same data. Our method is clinically applicable, taking only around five minutes per ten full High Power Field slides when tested on a Quadro P6000 cloud GPU.
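The key architectural idea is fusing features from the RGB stream and the segmentation-map stream with bilinear pooling. Below is a minimal sketch of such a fusion layer in PyTorch; the class name, the signed square-root step, and the normalisation are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamBilinearFusion(nn.Module):
    """Fuse two (N, C, H, W) feature maps with bilinear pooling."""

    def forward(self, rgb_feat: torch.Tensor, seg_feat: torch.Tensor) -> torch.Tensor:
        n, c, h, w = rgb_feat.shape
        a = rgb_feat.reshape(n, c, h * w)
        b = seg_feat.reshape(n, c, h * w)
        # Outer product of the two streams, averaged over spatial locations.
        bilinear = torch.bmm(a, b.transpose(1, 2)) / (h * w)   # (N, C, C)
        fused = bilinear.reshape(n, c * c)
        # Signed square-root and L2 normalisation, common after bilinear pooling.
        fused = torch.sign(fused) * torch.sqrt(fused.abs() + 1e-8)
        return F.normalize(fused, dim=1)
```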

  

Deep learning for brain metastasis detection and segmentation in longitudinal MRI data

Dec 28, 2021
Yixing Huang, Christoph Bert, Philipp Sommer, Benjamin Frey, Udo Gaipl, Luitpold V. Distel, Thomas Weissmann, Michael Uder, Manuel A. Schmidt, Arnd Dörfler, Andreas Maier, Rainer Fietkau, Florian Putz

Brain metastases occur frequently in patients with metastatic cancer. Early and accurate detection of brain metastases is essential for treatment planning and prognosis in radiation therapy. To improve brain metastasis detection performance with deep learning, a custom detection loss called volume-level sensitivity-specificity (VSS) is proposed, which rates individual metastasis detection sensitivity and specificity at the (sub-)volume level. As sensitivity and precision are always a trade-off at the metastasis level, either a high sensitivity or a high precision can be achieved by adjusting the weights in the VSS loss without a decline in the Dice score coefficient for segmented metastases. To reduce metastasis-like structures being detected as false positives, a temporal prior volume is proposed as an additional input to the neural network. Our proposed VSS loss improves the sensitivity of brain metastasis detection from 86.7% to 95.5%; alternatively, it improves the precision from 68.8% to 97.8%. With the additional temporal prior volume, about 45% of the false positive metastases are eliminated in the high-sensitivity model, and the precision reaches 99.6% for the high-specificity model. The mean Dice coefficient for all metastases is about 0.81. With an ensemble of the high-sensitivity and high-specificity models, on average only 1.5 false positive metastases per patient need further checking, while the majority of true positive metastases are confirmed. The ensemble is able to distinguish high-confidence true positive metastases from candidates that require expert review or further follow-up, making it particularly well suited to the requirements of expert support in real clinical practice.
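A sensitivity-specificity trade-off loss of this kind can be sketched as a weighted sum of (1 - sensitivity) and (1 - specificity) computed over a (sub-)volume; the function below is an illustrative approximation, not the paper's exact VSS formulation.

```python
import torch

def sensitivity_specificity_loss(pred: torch.Tensor, target: torch.Tensor,
                                 sens_weight: float = 0.5, eps: float = 1e-6) -> torch.Tensor:
    """Weighted (1 - sensitivity) + (1 - specificity) over a probability volume.

    `pred` holds voxel-wise probabilities in [0, 1]; `target` is the binary mask.
    Increasing `sens_weight` pushes the model towards higher sensitivity,
    decreasing it pushes towards higher specificity/precision.
    """
    pred, target = pred.float(), target.float()
    tp = (pred * target).sum()
    fn = ((1 - pred) * target).sum()
    tn = ((1 - pred) * (1 - target)).sum()
    fp = (pred * (1 - target)).sum()
    sensitivity = tp / (tp + fn + eps)
    specificity = tn / (tn + fp + eps)
    return sens_weight * (1 - sensitivity) + (1 - sens_weight) * (1 - specificity)
```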

  

Cribriform pattern detection in prostate histopathological images using deep learning models

Oct 09, 2019
Malay Singh, Emarene Mationg Kalaw, Wang Jie, Mundher Al-Shabi, Chin Fong Wong, Danilo Medina Giron, Kian-Tai Chong, Maxine Tan, Zeng Zeng, Hwee Kuan Lee

The architecture, size, and shape of glands are among the most important patterns used by pathologists to assess cancer malignancy in prostate histopathological tissue slides. Varying gland structures, together with cumbersome manual observation, can result in subjective and inconsistent assessment. A cribriform gland with an irregular border is an important feature in Gleason pattern 4. We propose using deep neural networks for cribriform pattern classification in prostate histopathological images. 163,708 Hematoxylin and Eosin (H&E) stained images were extracted from histopathologic tissue slides of 19 patients with prostate cancer and annotated for cribriform patterns. Our automated image classification system analyses the H&E images and classifies them as either 'Cribriform' or 'Non-cribriform'. The system uses various deep learning approaches as well as hand-crafted, pixel-intensity-based image features. We present results for cribriform pattern detection across the various parameters and configurations allowed by our system. The combination of fine-tuned deep learning models outperformed state-of-the-art nuclei-feature-based methods. Our image classification system achieved a testing accuracy of 85.93 ± 7.54 (cross-validated) and 88.04 ± 5.63 (additional unseen test set) across three folds. In this paper, we present an annotated cribriform dataset along with an analysis of deep learning models and hand-crafted features for cribriform pattern detection in prostate histopathological images.

* 21 pages, 4 figures, 6 tables 
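The fine-tuning step for a binary 'Cribriform' vs 'Non-cribriform' classifier can be sketched as follows; the ResNet-50 backbone, optimizer, and learning rate are illustrative choices, not the exact configuration from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone with a new two-class head ('Cribriform' vs 'Non-cribriform').
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch of H&E patches."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```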
  

PathoNet: Deep learning assisted evaluation of Ki-67 and tumor infiltrating lymphocytes (TILs) as prognostic factors in breast cancer; A large dataset and baseline

Oct 20, 2020
Farzin Negahbani, Rasool Sabzi, Bita Pakniyat Jahromi, Fatemeh Movahedi, Mahsa Kohandel Shirazi, Shayan Majidi, Dena Firouzabadi, Amirreza Dehghanian

The nuclear protein Ki-67 and tumor-infiltrating lymphocytes (TILs) have been introduced as prognostic factors for predicting tumor progression and treatment response. The value of the Ki-67 index and TILs in the approach to heterogeneous tumors such as breast cancer (BC), the most common cancer in women worldwide, has been highlighted in the literature. Because Ki-67 and TILs scoring is subjective and hard to standardize, automated methods using machine learning, specifically deep learning approaches, have attracted attention. Yet deep learning methods need considerable annotated data. In the absence of publicly available benchmarks for BC Ki-67 stained cell detection and further annotated classification of cells, we propose SHIDC-BC-Ki-67 as a dataset for this purpose. We also introduce a novel pipeline and a backend, PathoNet, for Ki-67 immunostained cell detection and classification and simultaneous determination of the intratumoral TILs score. Further, we show that despite the challenges involved, our proposed backend, PathoNet, outperforms the state-of-the-art methods proposed to date in the harmonic mean measure.
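Once cells are detected and classified, the Ki-67 index reduces to a ratio of Ki-67-positive tumor cells to all tumor cells. The helper below is a hypothetical post-processing step; the label names are placeholders, not the dataset's actual class scheme.

```python
from collections import Counter

def ki67_index(cell_labels: list[str]) -> float:
    """Ki-67 index = Ki-67-positive tumor cells / all tumor cells * 100."""
    counts = Counter(cell_labels)
    positive = counts["ki67_positive_tumor"]   # placeholder label name
    negative = counts["ki67_negative_tumor"]   # placeholder label name
    total = positive + negative
    return 100.0 * positive / total if total else 0.0
```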

  

ConCORDe-Net: Cell Count Regularized Convolutional Neural Network for Cell Detection in Multiplex Immunohistochemistry Images

Aug 01, 2019
Yeman Brhane Hagos, Priya Lakshmi Narayanan, Ayse U. Akarca, Teresa Marafioti, Yinyin Yuan

In digital pathology, cell detection and classification are often prerequisites for quantifying cell abundance and exploring tissue spatial heterogeneity. However, these tasks are particularly challenging for multiplex immunohistochemistry (mIHC) images due to high variability in staining and expression intensity, as well as inherent noise from preprocessing artefacts. We propose a deep learning method to detect and classify cells in mIHC whole-tumour slide images of breast cancer. Inspired by Inception-v3, we developed the Cell COunt RegularizeD convolutional neural network (ConCORDe-Net), which integrates conventional Dice overlap and a new cell count loss function for optimizing cell detection, followed by a multi-stage convolutional neural network for cell classification. In total, 20,447 cells belonging to five cell classes were annotated by experts in 175 patches extracted from 6 whole-tumour mIHC images. These patches were randomly split into training, validation and testing sets. Using ConCORDe-Net, we obtained a cell detection F1 score of 0.873, the best score compared with three state-of-the-art methods. In particular, ConCORDe-Net excels at detecting closely located and weakly stained cells. Incorporating the cell count loss in the objective function regularizes the network to learn weak gradient boundaries and to separate weakly stained cells from background artefacts. Moreover, a cell classification accuracy of 96.5% was achieved. These results support the view that incorporating problem-specific knowledge such as cell count into deep learning-based cell detection architectures improves the robustness of the algorithm.

* Accepted at MICCAI 2019; 3 figures, 8.5 pages 
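The count-regularized objective can be sketched as a Dice term plus a penalty on the difference between a soft predicted count and the ground-truth count; the soft-count surrogate and weighting below are illustrative assumptions, not ConCORDe-Net's exact loss.

```python
import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """1 - Dice overlap between a probability map and a binary mask."""
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def count_regularized_loss(pred_map: torch.Tensor, target_map: torch.Tensor,
                           true_count: torch.Tensor, count_weight: float = 0.1,
                           assumed_cell_area: float = 50.0) -> torch.Tensor:
    """Dice overlap plus an L1 penalty between soft predicted and true cell counts.

    The soft count divides the summed probability mass of each (N, 1, H, W) map
    by an assumed average cell area in pixels -- an illustrative surrogate.
    """
    soft_count = pred_map.sum(dim=(1, 2, 3)) / assumed_cell_area
    count_term = torch.abs(soft_count - true_count.float()).mean()
    return dice_loss(pred_map, target_map) + count_weight * count_term
```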
  

DeepLung: 3D Deep Convolutional Nets for Automated Pulmonary Nodule Detection and Classification

Sep 16, 2017
Wentao Zhu, Chaochun Liu, Wei Fan, Xiaohui Xie

In this work, we present DeepLung, a fully automated lung CT cancer diagnosis system. DeepLung consists of two parts: nodule detection and classification. Considering the 3D nature of lung CT data, two 3D networks are designed for nodule detection and classification, respectively. Specifically, a 3D Faster R-CNN with a U-net-like encoder-decoder structure is designed for nodule detection to effectively learn nodule features. For nodule classification, a gradient boosting machine (GBM) with 3D dual path network (DPN) features is proposed. The nodule classification subnetwork is validated on the public LIDC-IDRI dataset, on which it achieves better performance than state-of-the-art approaches and surpasses the average performance of four experienced doctors. In the DeepLung system, candidate nodules are first detected by the nodule detection subnetwork, and nodule diagnosis is then conducted by the classification subnetwork. Extensive experimental results demonstrate that DeepLung is comparable to experienced doctors for both nodule-level and patient-level diagnosis on the LIDC-IDRI dataset.
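The classification stage, a gradient boosting machine over features extracted by the 3D DPN, can be sketched with scikit-learn's GradientBoostingClassifier standing in for the paper's GBM implementation; the hyper-parameters and the precomputed `features` array are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def train_nodule_gbm(features: np.ndarray, labels: np.ndarray) -> GradientBoostingClassifier:
    """Fit a GBM on deep nodule features to predict benign (0) vs malignant (1)."""
    gbm = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)
    scores = cross_val_score(gbm, features, labels, cv=5, scoring="accuracy")
    print(f"5-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
    return gbm.fit(features, labels)
```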

  

Skin Lesion Analyser: An Efficient Seven-Way Multi-Class Skin Cancer Classification Using MobileNet

Aug 03, 2019
Saket S. Chaturvedi, Kajol Gupta, Prakash. S. Prasad

Skin cancer, a major form of cancer, is a critical public health problem, with 123,000 newly diagnosed melanoma cases and between 2 and 3 million non-melanoma cases worldwide each year. The leading cause of skin cancer is high exposure of skin cells to UV radiation, which can damage the DNA inside skin cells and lead to uncontrolled growth. Skin cancer is primarily diagnosed visually, employing clinical screening, biopsy, dermoscopic analysis, and histopathological examination. It has been demonstrated that dermoscopic analysis in the hands of inexperienced dermatologists may reduce diagnostic accuracy. Early detection and screening of skin cancer have the potential to reduce mortality and morbidity. Previous studies have shown that deep learning can perform better than human experts in several visual recognition tasks. In this paper, we propose an efficient, automated seven-way multi-class skin cancer classification system with performance comparable to expert dermatologists. We used a pretrained MobileNet model trained on the HAM10000 dataset using transfer learning. The model classifies skin lesion images with a categorical accuracy of 83.1 percent, top-2 accuracy of 91.36 percent, and top-3 accuracy of 95.34 percent. The weighted averages of precision, recall, and F1-score were 0.89, 0.83, and 0.83, respectively. The model has been deployed as a web application for public use at https://saketchaturvedi.github.io. This fast, expandable method holds the potential for substantial clinical impact, including broadening the scope of primary care practice and augmenting clinical decision-making for dermatology specialists.

* 12 pages, 4 figures, and 2 tables 
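The transfer-learning setup can be sketched as follows; a torchvision MobileNetV2 stands in for the pretrained MobileNet used in the paper, and the layer-freezing policy and hyper-parameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 7  # HAM10000 classes: akiec, bcc, bkl, df, mel, nv, vasc

# Pretrained MobileNetV2 with a frozen convolutional base and a new 7-way head.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
for param in model.features.parameters():
    param.requires_grad = False
model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()  # expects integer class labels per image
```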
  