
"cancer detection": models, code, and papers

A Comparative Study on Polyp Classification using Convolutional Neural Networks

Jul 12, 2020
Krushi Patel, Kaidong Li, Ke Tao, Quan Wang, Ajay Bansal, Amit Rastogi, Guanghui Wang

Colorectal cancer is the third most common cancer diagnosed in both men and women in the United States. Most colorectal cancers start as a growth on the inner lining of the colon or rectum, called a polyp. Not all polyps are cancerous, but some can develop into cancer. Early detection and recognition of the polyp type is critical to preventing cancer and changing outcomes. However, visual classification of polyps is challenging due to the varying illumination conditions of endoscopy, variations in texture and appearance, and overlapping morphology between polyps. More importantly, evaluation of polyp patterns by gastroenterologists is subjective, leading to poor agreement among observers. Deep convolutional neural networks have proven very successful in object classification across various object categories. In this work, we compare the performance of state-of-the-art general object classification models on polyp classification. We trained a total of six CNN models end-to-end using a dataset of 157 video sequences composed of two types of polyps: hyperplastic and adenomatous. Our results demonstrate that state-of-the-art CNN models can classify polyps with an accuracy comparable to or better than that reported among gastroenterologists. The results of this study can guide future research in polyp classification.
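As a minimal sketch of the end-to-end training setup the abstract describes, the snippet below fine-tunes an ImageNet-pretrained CNN for the two-class polyp problem; ResNet-50 and the hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and replace the classification
# head with a two-way output (hyperplastic vs. adenomatous).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """One end-to-end update on a batch of polyp frames (N, 3, 224, 224)."""
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```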

  

Gleason Grading of Histology Prostate Images through Semantic Segmentation via Residual U-Net

May 22, 2020
Amartya Kalapahar, Julio Silva-Rodríguez, Adrián Colomer, Fernando López-Mir, Valery Naranjo

Worldwide, prostate cancer is one of the most common cancers affecting men. The final diagnosis of prostate cancer is based on the visual detection of Gleason patterns in prostate biopsies by pathologists. Computer-aided-diagnosis systems make it possible to delineate and classify the cancerous patterns in the tissue via computer-vision algorithms in order to support the physicians' task. The methodological core of this work is a U-Net convolutional neural network for image segmentation, modified with residual blocks, able to segment cancerous tissue according to the full Gleason system. This model outperforms other well-known architectures and reaches a pixel-level Cohen's quadratic kappa of 0.52, on par with previous image-level works in the literature, while also providing a detailed localisation of the patterns.
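Below is a minimal sketch of the kind of residual block that can replace a plain U-Net convolution stage; the paper's exact block layout (normalisation, activation order, channel widths) is not given in the abstract, so these are illustrative choices.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection, standing in for the
    residual blocks that modify the plain U-Net (sketch only)."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the skip matches the output channel count.
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.skip(x))
```

The reported metric can be reproduced with scikit-learn's `cohen_kappa_score(y_true, y_pred, weights="quadratic")` over the flattened pixel labels.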

  

Learning Pain from Action Unit Combinations: A Weakly Supervised Approach via Multiple Instance Learning

Feb 20, 2018
Zhanli Chen, Rashid Ansari, Diana J. Wilkie

Patient pain can be detected highly reliably from facial expressions using a set of facial muscle-based action units (AUs) defined by the Facial Action Coding System (FACS). A key characteristic of the facial expression of pain is the simultaneous occurrence of pain-related AU combinations, whose automated detection would be highly beneficial for efficient and practical pain monitoring. Existing general Automated Facial Expression Recognition (AFER) systems prove inadequate when applied specifically to detecting pain: they either focus on detecting individual pain-related AUs but not their combinations, or they seek to bypass AU detection by training a binary pain classifier directly on pain intensity data but are limited by the lack of sufficient labeled data for satisfactory training. In this paper, we propose a new approach that mimics the strategy of human coders by decoupling pain detection into two consecutive tasks: one performed at the individual video-frame level and the other at the video-sequence level. Using state-of-the-art AFER tools to detect single AUs at the frame level, we propose two novel data structures to encode AU combinations from single AU scores. Two weakly supervised learning frameworks, namely multiple instance learning (MIL) and multiple clustered instance learning (MCIL), are employed, one per data structure, to learn pain from video sequences. Experimental results show an 87% pain recognition accuracy with 0.94 AUC (Area Under Curve) on the UNBC-McMaster Shoulder Pain Expression dataset. Tests on long videos in a lung cancer patient video dataset demonstrate the potential value of the proposed system for pain monitoring in clinical settings.
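The sequence-level decision in MIL rests on the standard bag assumption: a video is positive for pain if at least one frame is. A minimal sketch of that pooling step, with hypothetical per-frame scores and a hypothetical 0.5 threshold (the paper's frameworks learn the instance scorer rather than assuming one):

```python
import numpy as np

def mil_sequence_score(frame_scores: np.ndarray) -> float:
    """Bag score = max over instance (frame) scores: the video is
    positive if any single frame is positive."""
    return float(frame_scores.max())

# Hypothetical per-frame pain scores derived from AU-combination features.
frames = np.array([0.05, 0.12, 0.81, 0.33])
is_pain = mil_sequence_score(frames) > 0.5  # hypothetical decision threshold
```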

  

Mammograms Classification: A Review

Mar 04, 2022
Marawan Elbatel

Digital mammography, an advanced, reliable, and low-cost screening method, has been used as an effective imaging technique for breast cancer detection. With an increased focus on technologies to aid healthcare, mammogram images have been utilized in developing computer-aided diagnosis systems that can potentially help in clinical diagnosis. Researchers have shown that artificial intelligence, with its emerging technologies, can be used in the early detection of the disease and can improve radiologists' performance in assessing breast cancer. In this paper, we review the methods developed for mammogram mass classification in two categories. The first is classifying manually provided cropped regions of interest (ROIs) as either malignant or benign; the second is the classification of automatically segmented ROIs as either malignant or benign. We also provide an overview of datasets and evaluation metrics used in the classification task. Finally, we compare and discuss deep learning approaches versus classical image processing and learning approaches in this domain.

  

Superpixel Based Segmentation and Classification of Polyps in Wireless Capsule Endoscopy

May 28, 2018
Omid Haji Maghsoudi

Wireless Capsule Endoscopy (WCE) is a relatively new technology for recording the entire GI tract in vivo. The large number of frames captured during an examination makes it difficult for physicians to review them all, and reducing the review time with intelligent methods remains a challenge. Polyps are growing tissues on the surface of the intestinal tract, not inside an organ. Most polyps are not cancerous, but if one becomes larger than a centimeter, it has a high chance of turning into cancer. WCE frames offer the possibility of early-stage polyp detection. Here, the application of simple linear iterative clustering (SLIC) superpixels for segmentation of polyps in WCE frames is evaluated. Different SLIC superpixel numbers are examined to find the highest sensitivity for polyp detection. SLIC superpixel segmentation shows promise in improving on the results of previous studies. Finally, the superpixels were classified using a support vector machine (SVM) after extracting texture and color features. The classification results showed a sensitivity of 91%.
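A minimal sketch of the SLIC-then-classify pipeline: segment a frame into superpixels, compute per-superpixel features, and score them with a trained SVM. The HSV colour statistics and the superpixel count below are illustrative assumptions; the paper also uses texture features and sweeps the superpixel number.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.color import rgb2hsv
from sklearn.svm import SVC

def classify_superpixels(frame: np.ndarray, clf: SVC, n_segments: int = 200):
    """Segment an RGB WCE frame into SLIC superpixels and classify each
    as polyp / non-polyp; `clf` is assumed to be an already-trained SVM."""
    labels = slic(frame, n_segments=n_segments, start_label=0)
    hsv = rgb2hsv(frame)
    feats = []
    for sp in np.unique(labels):
        mask = labels == sp
        # Mean and std of the HSV channels as a minimal colour descriptor.
        feats.append(np.concatenate([hsv[mask].mean(axis=0),
                                     hsv[mask].std(axis=0)]))
    return clf.predict(np.asarray(feats))
```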

* This paper has been published in SPMB 2017 
  

Machine learning approach for segmenting glands in colon histology images using local intensity and texture features

May 15, 2019
Rupali Khatun, Soumick Chatterjee

Colon cancer is one of the most common types of cancer, and treatment is planned depending on the grade or stage of the cancer. One of the preconditions for grading colon cancer is to segment the glandular structures of the tissue. Manual segmentation is very time-consuming, and the resulting delays put patients at risk. The principal objective of this project is to assist the pathologist in the accurate detection of colon cancer. In this paper, the authors propose an algorithm for automatic segmentation of glands in colon histology images using local intensity and texture features. The dataset images are cropped into patches with different window sizes, and intensity and texture-based features are calculated for each patch. A random forest classifier is used to assign labels to the patches, and a multilevel, hierarchical random forest technique is proposed. The solution is fast, accurate, and readily applicable in a clinical setup.
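A minimal sketch of the patch-feature step feeding a random forest; the window size and the specific intensity statistics are assumptions (the paper uses multiple window sizes and adds texture features):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def patch_features(image: np.ndarray, centers, window: int = 15) -> np.ndarray:
    """Crop a square patch around each pixel centre and compute simple
    intensity statistics as features. Centres are assumed to lie at
    least window // 2 pixels away from the image border."""
    half = window // 2
    feats = []
    for r, c in centers:
        patch = image[r - half:r + half + 1, c - half:c + half + 1]
        feats.append([patch.mean(), patch.std(), patch.min(), patch.max()])
    return np.asarray(feats)

# Assumed training data: pixel centres with gland / non-gland labels.
# clf = RandomForestClassifier(n_estimators=100)
# clf.fit(patch_features(img, centers), labels)
```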

* 8th International Advance Computing Conference (IACC), 2018 
  

Towards a Complete Pipeline for Segmenting Nuclei in Feulgen-Stained Images

Feb 19, 2020
Luiz Antonio Buschetto Macarini, Aldo von Wangenheim, Felipe Perozzo Daltoé, Alexandre Sherlley Casimiro Onofre, Fabiana Botelho de Miranda Onofre, Marcelo Ricardo Stemmer

Cervical cancer is the second most common cancer type in women around the world. In some countries, due to non-existent or inadequate screening, it is often detected at late stages, when standard treatment options are often absent or unaffordable. It is a deadly disease that could benefit from early detection approaches. Screening is usually done by cytological exams, which consist of visually inspecting the nuclei in search of morphological alterations. Since this is done by humans, some subjectivity is naturally introduced; computational methods could be used to reduce it, with the first stage of the process being nuclei segmentation. In this context, we present a complete pipeline for the segmentation of nuclei in Feulgen-stained images using Convolutional Neural Networks. We show the entire segmentation process, from sample collection through pre-processing, network training, post-processing, and results evaluation. We achieved an overall IoU of 0.78, showing the feasibility of nuclei segmentation on Feulgen-stained images. The code is available at: https://github.com/luizbuschetto/feulgen_nuclei_segmentation.
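The reported metric is straightforward to reproduce for binary masks; a minimal sketch:

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union between a predicted and a ground-truth
    binary nuclei mask (the pipeline reports 0.78 overall)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0
```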

* 7 pages, 8 figures (Figure 2 with "a" and "b"), conference paper accepted for presentation at XI Computer on the Beach (https://www.computeronthebeach.com.br/)
  

Computed Tomography Image Enhancement using 3D Convolutional Neural Network

Jul 18, 2018
Meng Li, Shiwen Shen, Wen Gao, William Hsu, Jason Cong

Computed tomography (CT) is increasingly being used for cancer screening, such as early detection of lung cancer. However, CT studies have varying pixel spacing due to differences in acquisition parameters. Thick-slice CTs have lower resolution, hindering tasks such as nodule characterization during computer-aided detection due to the partial volume effect. In this study, we propose a novel 3D enhancement convolutional neural network (3DECNN) to improve the spatial resolution of CT studies that were acquired at lower resolutions/greater slice thicknesses. Using a subset of the LIDC dataset consisting of 20,672 CT slices from 100 scans, we simulated lower-resolution/thick-section scans and then reconstructed the original images using our 3DECNN network. We observe a significant improvement over other state-of-the-art deep learning methods in PSNR (29.3087 dB vs. 28.8769 dB, p-value < 2.2e-16) and SSIM (0.8529 vs. 0.8449, p-value < 2.2e-16).
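As an illustration of the enhancement idea, below is a minimal 3D convolutional network that learns a residual correction added back to the input volume; the actual 3DECNN architecture (depth, channel widths, upsampling strategy) is not reproduced here.

```python
import torch.nn as nn

class Enhance3D(nn.Module):
    """Minimal 3D convolutional enhancement network: predicts a residual
    that sharpens a thick-slice CT volume (sketch only; the paper's
    3DECNN differs)."""
    def __init__(self, ch: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, 1, 3, padding=1),
        )

    def forward(self, x):  # x: (N, 1, D, H, W) CT volume
        return x + self.net(x)  # residual enhancement
```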

  

Microscope 2.0: An Augmented Reality Microscope with Real-time Artificial Intelligence Integration

Dec 04, 2018
Po-Hsuan Cameron Chen, Krishna Gadepalli, Robert MacDonald, Yun Liu, Kunal Nagpal, Timo Kohlberger, Jeffrey Dean, Greg S. Corrado, Jason D. Hipp, Martin C. Stumpe

The brightfield microscope is instrumental in the visual examination of both biological and physical samples at sub-millimeter scales. One key clinical application has been in cancer histopathology, where the microscopic assessment of the tissue samples is used for the diagnosis and staging of cancer and thus guides clinical therapy. However, the interpretation of these samples is inherently subjective, resulting in significant diagnostic variability. Moreover, in many regions of the world, access to pathologists is severely limited due to lack of trained personnel. In this regard, Artificial Intelligence (AI) based tools promise to improve the access and quality of healthcare. However, despite significant advances in AI research, integration of these tools into real-world cancer diagnosis workflows remains challenging because of the costs of image digitization and difficulties in deploying AI solutions. Here we propose a cost-effective solution to the integration of AI: the Augmented Reality Microscope (ARM). The ARM overlays AI-based information onto the current view of the sample through the optical pathway in real-time, enabling seamless integration of AI into the regular microscopy workflow. We demonstrate the utility of ARM in the detection of lymph node metastases in breast cancer and the identification of prostate cancer with a latency that supports real-time workflows. We anticipate that ARM will remove barriers towards the use of AI in microscopic analysis and thus improve the accuracy and efficiency of cancer diagnosis. This approach is applicable to other microscopy tasks and AI algorithms in the life sciences and beyond.

  

Neural Transformers for Intraductal Papillary Mucosal Neoplasms (IPMN) Classification in MRI images

Jun 21, 2022
Federica Proietto Salanitri, Giovanni Bellitto, Simone Palazzo, Ismail Irmakci, Michael B. Wallace, Candice W. Bolan, Megan Engels, Sanne Hoogenboom, Marco Aldinucci, Ulas Bagci, Daniela Giordano, Concetto Spampinato

Early detection of precancerous cysts or neoplasms, i.e., Intraductal Papillary Mucosal Neoplasms (IPMN), in the pancreas is a challenging and complex task, and it may lead to a more favourable outcome. Once detected, grading IPMNs accurately is also necessary, since low-risk IPMNs can be kept under a surveillance program, while high-risk IPMNs have to be surgically resected before they turn into cancer. Current standards (Fukuoka and others) for IPMN classification show significant intra- and inter-operator variability, besides being error-prone, making a proper diagnosis unreliable. The established progress in artificial intelligence, through the deep learning paradigm, may provide a key tool for effective support of medical decisions for pancreatic cancer. In this work, we follow this trend by proposing a novel AI-based IPMN classifier that leverages the recent success of transformer networks in generalizing across a wide variety of tasks, including vision. We specifically show that our transformer-based model exploits pre-training better than standard convolutional neural networks, thus supporting the sought-after architectural universality of transformers in vision, including the medical imaging domain, while also allowing for a better interpretation of the obtained results.
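A minimal sketch of the general recipe (a pre-trained vision transformer with a replaced classification head); the backbone, the pre-training weights, and the three-class label set below are illustrative assumptions, not the paper's architecture:

```python
import torch.nn as nn
from torchvision import models

# An ImageNet-pretrained Vision Transformer with its head swapped for a
# hypothetical three-class IPMN risk grading (e.g. no lesion / low risk /
# high risk). MRI slices would need resizing to 224x224 and replication
# to 3 channels before being fed to this model.
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, 3)
```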

  