
"cancer detection": models, code, and papers

Weakly Supervised Multi-Task Learning for Cell Detection and Segmentation

Oct 27, 2019
Alireza Chamanzar, Yao Nie

Cell detection and segmentation are fundamental for all downstream analysis of digital pathology images. However, obtaining pixel-level ground truth for single-cell segmentation is extremely labor intensive. To overcome this challenge, we developed an end-to-end deep learning algorithm that performs both single-cell detection and segmentation using only point labels. This is achieved through a combination of task-oriented point label encoding methods and a multi-task scheduler for training. We apply and validate our algorithm on PMS2-stained colorectal cancer and tonsil tissue images. Compared to the state-of-the-art, our algorithm shows significant improvement in cell detection and segmentation without increasing the annotation effort.
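
A minimal sketch of one way point labels could be turned into dense training targets: encoding each annotated cell centre as a Gaussian blob in a heatmap. This is a generic encoding for illustration, not necessarily the paper's exact task-oriented scheme; the function name and parameters are assumptions.

```python
import numpy as np

def encode_point_labels(points, height, width, sigma=3.0):
    """Encode cell-centre point annotations as a Gaussian heatmap target.

    points : iterable of (row, col) cell-centre coordinates.
    Returns a (height, width) float array with one Gaussian blob per point.
    """
    heatmap = np.zeros((height, width), dtype=np.float32)
    ys, xs = np.mgrid[0:height, 0:width]
    for (r, c) in points:
        blob = np.exp(-((ys - r) ** 2 + (xs - c) ** 2) / (2.0 * sigma ** 2))
        heatmap = np.maximum(heatmap, blob)  # keep the strongest response per pixel
    return heatmap

# Example: two annotated cell centres in a 64x64 crop
target = encode_point_labels([(20, 20), (40, 45)], 64, 64)
```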

  

RCNN for Region of Interest Detection in Whole Slide Images

Sep 18, 2020
A Nugaliyadde, Kok Wai Wong, Jeremy Parry, Ferdous Sohel, Hamid Laga, Upeka V. Somaratne, Chris Yeomans, Orchid Foster

Digital pathology has attracted significant attention in recent years. Analysis of Whole Slide Images (WSIs) is challenging because they are very large, i.e., of giga-pixel resolution. Identifying Regions of Interest (ROIs) is the first step for pathologists to further analyse the regions of diagnostic interest for cancer detection and other anomalies. In this paper, we investigate the use of RCNN, a deep learning technique, for detecting such ROIs using only a small number of labelled WSIs for training. For experimentation, we used real WSIs from a public hospital pathology service in Western Australia. We used 60 WSIs for training the RCNN model and another 12 WSIs for testing. The model was further tested on a new set of unseen WSIs. The results show that RCNN can be effectively used for ROI detection from WSIs.
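
As a hedged illustration of this kind of setup, the sketch below fine-tunes torchvision's Faster R-CNN on a down-sampled WSI tile with ROI boxes. It is a stand-in for the general workflow, not the authors' exact RCNN variant or training pipeline.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a detector pre-trained on natural images and adapt its head
# to two classes: background and "region of interest".
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

# One training step on a down-sampled WSI tile with its ROI boxes.
images = [torch.rand(3, 1024, 1024)]
targets = [{
    "boxes": torch.tensor([[100.0, 150.0, 400.0, 500.0]]),  # x1, y1, x2, y2
    "labels": torch.tensor([1]),
}]
model.train()
losses = model(images, targets)   # dict of classification / box-regression losses
total_loss = sum(losses.values())
total_loss.backward()
```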

* This paper was accepted to the 27th International Conference on Neural Information Processing (ICONIP 2020) and will be published in the Springer CCIS Series 
  

Mitosis Detection Under Limited Annotation: A Joint Learning Approach

Jul 02, 2020
Pushpak Pati, Antonio Foncubierta-Rodriguez, Orcun Goksel, Maria Gabrani

Mitotic counting is a vital prognostic marker of tumor proliferation in breast cancer. Deep learning-based mitosis detection is on par with pathologists, but it requires large amounts of labeled data for training. We propose a deep classification framework for enhancing mitosis detection by leveraging class label information, via a softmax loss, and spatial distribution information among samples, via distance metric learning. We also investigate strategies for steadily providing informative samples to boost learning. The efficacy of the proposed framework is established through evaluation on the ICPR 2012 and AMIDA 2013 mitosis datasets. Our framework significantly improves detection with small training data and achieves performance on par with or superior to state-of-the-art methods that use the entire training data.
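
A rough sketch of the joint-learning idea: combining a softmax (cross-entropy) classification loss with a distance-metric term on the embeddings. The triplet loss and the weighting below are assumptions for illustration, not the paper's exact metric-learning formulation.

```python
import torch
import torch.nn as nn

class JointLoss(nn.Module):
    """Cross-entropy on class labels plus a triplet loss on the embeddings."""
    def __init__(self, weight=0.5, margin=1.0):
        super().__init__()
        self.ce = nn.CrossEntropyLoss()
        self.triplet = nn.TripletMarginLoss(margin=margin)
        self.weight = weight

    def forward(self, logits, embeddings, labels, anchor_idx, pos_idx, neg_idx):
        ce_loss = self.ce(logits, labels)
        metric_loss = self.triplet(
            embeddings[anchor_idx], embeddings[pos_idx], embeddings[neg_idx]
        )
        return ce_loss + self.weight * metric_loss

# Example: a batch of 6 embeddings with two hand-picked triplets
logits = torch.randn(6, 2)
embeddings = torch.randn(6, 128)
labels = torch.tensor([0, 0, 1, 1, 0, 1])
loss = JointLoss()(logits, embeddings, labels,
                   anchor_idx=[0, 2], pos_idx=[1, 3], neg_idx=[2, 0])
```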

* 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI) 
  

A Hypersensitive Breast Cancer Detector

Jan 23, 2020
Stefano Pedemonte, Brent Mombourquette, Alexis Goh, Trevor Tsue, Aaron Long, Sadanand Singh, Thomas Paul Matthews, Meet Shah, Jason Su

Early detection of breast cancer through screening mammography yields a 20-35% increase in survival rate; however, there are not enough radiologists to serve the growing population of women seeking screening mammography. Although commercial computer aided detection (CADe) software has been available to radiologists for decades, it has failed to improve the interpretation of full-field digital mammography (FFDM) images due to its low sensitivity over the spectrum of findings. In this work, we leverage a large set of FFDM images with loose bounding boxes of mammographically significant findings to train a deep learning detector with extreme sensitivity. Building upon the Hourglass architecture, we train a model that produces segmentation-like images with high spatial resolution, with the aim of producing 2D Gaussian blobs centered on ground-truth boxes. We replace the pixel-wise $L_2$ norm with a weak-supervision loss designed to achieve high sensitivity, asymmetrically penalizing false positives and false negatives while softening the noise of the loose bounding boxes by permitting a tolerance in misaligned predictions. The resulting system achieves a sensitivity for malignant findings of 0.99 with only 4.8 false positive markers per image. When utilized in a CADe system, this model could enable a novel workflow in which radiologists can focus their attention, with trust, on only the locations proposed by the model, expediting the interpretation process and bringing attention to potential findings that could otherwise have been missed. Due to its nearly perfect sensitivity, the proposed detector can also be used as a high-performance proposal generator in two-stage detection systems.
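
A hedged sketch of an asymmetric heatmap loss in the spirit described above: missed responses inside annotated blobs are penalised more heavily than spurious responses outside them, and a small max-pool window tolerates slightly misaligned predictions. The exact weighting and tolerance mechanism in the paper may differ; all names and constants below are assumptions.

```python
import torch
import torch.nn.functional as F

def asymmetric_heatmap_loss(pred, target, fn_weight=10.0, fp_weight=1.0, tolerance=5):
    """Asymmetric pixel-wise loss for blob-style detection targets.

    Inside annotated blobs (target > 0) missed responses are penalised with
    fn_weight; outside them, spurious responses are penalised with fp_weight.
    A max-pool over `tolerance` pixels lets a prediction count as a hit even
    if it is slightly shifted relative to the loose box centre.
    """
    pos = (target > 0).float()
    # Allow slightly shifted predictions to satisfy positive locations.
    pred_tol = F.max_pool2d(pred, kernel_size=tolerance, stride=1,
                            padding=tolerance // 2)
    fn_term = fn_weight * pos * (1.0 - pred_tol).clamp(min=0) ** 2
    fp_term = fp_weight * (1.0 - pos) * pred.clamp(min=0) ** 2
    return (fn_term + fp_term).mean()

# Example on a 1x1x64x64 predicted heatmap with one annotated blob
pred = torch.sigmoid(torch.randn(1, 1, 64, 64))
target = torch.zeros(1, 1, 64, 64)
target[0, 0, 30:34, 30:34] = 1.0
loss = asymmetric_heatmap_loss(pred, target)
```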

* SPIE Medical Imaging 2020 
  

Visualizing CoAtNet Predictions for Aiding Melanoma Detection

May 21, 2022
Daniel Kvak

Melanoma is considered the most aggressive form of skin cancer. Due to the similar appearance of malignant and benign skin lesions, doctors spend considerably more time when diagnosing these findings. At present, the evaluation of malignancy is performed primarily by invasive histological examination of the suspicious lesion. Developing an accurate classifier for early and efficient detection can minimize and monitor the harmful effects of skin cancer and increase patient survival rates. This paper proposes a multi-class classification task using the CoAtNet architecture, a hybrid model that combines the depthwise convolution operation of traditional convolutional neural networks with the strengths of Transformer models and self-attention mechanisms to achieve better generalization and capacity. The proposed multi-class classifier achieves an overall precision of 0.901, recall of 0.895, and AP of 0.923, indicating high performance compared to other state-of-the-art networks.
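
The toy block below only illustrates the conv-plus-attention idea behind CoAtNet-style hybrids (a depthwise convolution stage followed by self-attention over the spatial tokens); it is not the CoAtNet architecture or the classifier used in the paper.

```python
import torch
import torch.nn as nn

class ConvAttentionBlock(nn.Module):
    """Toy hybrid block: depthwise convolution followed by self-attention."""
    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.dwconv = nn.Conv2d(channels, channels, kernel_size=3,
                                padding=1, groups=channels)   # depthwise conv
        self.pwconv = nn.Conv2d(channels, channels, kernel_size=1)
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x):                      # x: (B, C, H, W)
        x = x + self.pwconv(self.dwconv(x))    # local features
        b, c, h, w = x.shape
        tokens = self.norm(x.flatten(2).transpose(1, 2))   # (B, H*W, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)    # global interactions
        return x + attn_out.transpose(1, 2).reshape(b, c, h, w)

# Example: an 8x8 feature map with 64 channels
out = ConvAttentionBlock(64)(torch.randn(2, 64, 8, 8))
```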

  

Multi-Task Lung Nodule Detection in Chest Radiographs with a Dual Head Network

Jul 07, 2022
Chen-Han Tsai, Yu-Shao Peng

Lung nodules can be an alarming precursor to potential lung cancer. Missed nodule detections during chest radiograph analysis remain a common challenge among thoracic radiologists. In this work, we present a multi-task lung nodule detection algorithm for chest radiograph analysis. Unlike past approaches, our algorithm predicts a global-level label indicating nodule presence along with local-level labels predicting nodule locations, using a Dual Head Network (DHN). We demonstrate the favorable nodule detection performance that our multi-task formulation yields in comparison to conventional methods. In addition, we introduce a novel Dual Head Augmentation (DHA) strategy tailored for the DHN, and we demonstrate its significance in further enhancing global and local nodule predictions.
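
A minimal sketch of the dual-head idea: a shared backbone feeding a global nodule-presence head and a local location head. The backbone, head designs, and the DHA augmentation are not reproduced here; everything below is an assumption for illustration.

```python
import torch
import torch.nn as nn
import torchvision

class DualHeadNet(nn.Module):
    """Shared backbone with a global presence head and a local location head."""
    def __init__(self):
        super().__init__()
        resnet = torchvision.models.resnet18(weights=None)
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])  # (B, 512, h, w)
        self.global_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, 1))
        self.local_head = nn.Conv2d(512, 1, kernel_size=1)  # coarse location map

    def forward(self, x):
        feats = self.backbone(x)
        global_logit = self.global_head(feats)   # is a nodule present anywhere?
        local_map = self.local_head(feats)       # where in the radiograph?
        return global_logit, local_map

model = DualHeadNet()
logit, heatmap = model(torch.randn(1, 3, 512, 512))
```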

* 11 pages, 3 figures, Accepted to the MICCAI Conference 2022 
  

Review on Computer Vision in Gastric Cancer: Potential Efficient Tools for Diagnosis

May 31, 2020
Yihua Sun

Rapid diagnosis of gastric cancer is a great challenge for clinical doctors. Computer vision for gastric cancer has made dramatic progress recently, and this review focuses on advances during the past five years. Different methods for data generation and augmentation are presented, and various approaches to extracting discriminative features are compared and evaluated. Classification and segmentation techniques are carefully discussed for assisting more precise diagnosis and timely treatment. For classification, various methods have been developed to better process specific image types, such as rotated images that must be evaluated in real time (endoscopy), high-resolution images (histopathology), images with low diagnostic accuracy (X-ray), poor-contrast images of soft tissue with cavities (CT), and images with insufficient annotation. For detection and segmentation, traditional methods and machine learning methods are compared. Application of these methods will greatly reduce the labor and time required for the diagnosis of gastric cancer.

  

Infinite Curriculum Learning for Efficiently Detecting Gastric Ulcers in WCE Images

Sep 07, 2018
Xiaolu Zhang, Shiwan Zhao, Lingxi Xie

Wireless Capsule Endoscopy (WCE) is becoming a popular way of screening for gastrointestinal diseases and cancer. However, the time-consuming process of inspecting WCE data limits its applications and increases the cost of examinations. This paper considers WCE-based gastric ulcer detection, in which the major challenge is to detect lesions in a local region. We propose an approach named infinite curriculum learning, which generalizes curriculum learning to an infinite sampling space by approximately measuring the difficulty of each patch by its scale. This allows us to adapt our model from local patches to global images gradually, leading to a consistent accuracy gain. Experiments are performed on a large dataset with more than 3 million WCE images. Our approach achieves a binary classification accuracy of 87%, and is able to detect some lesions mis-annotated by the physicians. In a real-world application, our approach can reduce the workload of a physician by 90%-98% in gastric ulcer screening.
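
A simple sketch of a scale-based curriculum sampler: early in training, small local patches dominate; as training progresses, the crops approach whole images. The linear schedule and function names are assumptions, not the paper's exact sampling rule.

```python
import random

def sample_patch_scale(progress, min_scale=0.25, max_scale=1.0):
    """Draw a crop scale that grows with training progress (0.0 -> 1.0).

    Early in training we sample small local patches (easier, since the lesion
    occupies a larger fraction of the crop); later we move toward whole images.
    """
    upper = min_scale + (max_scale - min_scale) * progress
    return random.uniform(min_scale, upper)

def crop_fraction(image, scale):
    """Return a centred crop covering `scale` of each image dimension."""
    h, w = image.shape[:2]
    ch, cw = max(1, int(h * scale)), max(1, int(w * scale))
    top, left = (h - ch) // 2, (w - cw) // 2
    return image[top:top + ch, left:left + cw]

# Epoch 1 of 100: mostly small local patches; epoch 100: near-global crops.
scale_early = sample_patch_scale(progress=0.01)
scale_late = sample_patch_scale(progress=0.99)
```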

* 9 pages, 4 figures 
  