
"cancer detection": models, code, and papers

Random Forest as a Tumour Genetic Marker Extractor

Nov 26, 2019
Raquel Pérez-Arnal, Dario Garcia-Gasulla, David Torrents, Ferran Parés, Ulises Cortés, Jesús Labarta, Eduard Ayguadé

Finding tumour genetic markers is essential to biomedicine due to their relevance for cancer detection and therapy development. In this paper, we explore a recently released dataset of chromosome rearrangements in 2,586 cancer patients, where different sorts of alterations have been detected. Using a Random Forest classifier, we evaluate the relevance of several features (some directly available in the original data, some engineered by us) related to chromosome rearrangements. This evaluation results in a set of potential tumour genetic markers, some of which are already validated in the literature, while others are potentially novel.
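
A minimal sketch of the kind of analysis described above, assuming scikit-learn and purely synthetic stand-in data (the feature names below are hypothetical, not those of the paper): a Random Forest is fit to patient-level rearrangement features and its impurity-based importances are used to rank candidate markers.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for per-patient rearrangement features (names are hypothetical).
feature_names = ["breakpoint_count", "translocation_count", "inversion_count", "mean_segment_length"]
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# Rank features by impurity-based importance, a common marker-screening heuristic.
for name, importance in sorted(zip(feature_names, clf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {importance:.3f}")
```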

  

Detector-SegMentor Network for Skin Lesion Localization and Segmentation

May 13, 2020
Shreshth Saini, Divij Gupta, Anil Kumar Tiwari

Melanoma is a life-threatening form of skin cancer if left undiagnosed at the early stages. Although non-melanoma skin cancers are more common, melanoma is more deadly. Early detection of melanoma is crucial for timely diagnosis and for preventing its spread to distant body parts. Segmentation of the skin lesion is a crucial step in classifying melanoma from other cancerous lesions in dermoscopic images. Manual segmentation of dermoscopic skin images is very time-consuming and error-prone, resulting in an urgent need for an intelligent and accurate algorithm. In this study, we propose a simple yet novel network-in-network convolutional neural network (CNN) based approach for segmentation of the skin lesion. A Faster Region-based CNN (Faster RCNN) is used for preprocessing to predict bounding boxes of the lesions in the whole image, which are subsequently cropped and fed into the segmentation network to obtain the lesion mask. The segmentation network is a combination of the UNet and Hourglass networks. We trained and evaluated our models on the ISIC 2018 dataset and also cross-validated on the PH² and ISBI 2017 datasets. Our proposed method surpassed the state of the art with a Dice Similarity Coefficient of 0.915 and accuracy of 0.959 on the ISIC 2018 dataset, and a Dice Similarity Coefficient of 0.947 and accuracy of 0.971 on the ISBI 2017 dataset.

* 9 pages, 7 figures, accepted at NCVPRIPG 2019 
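
The Dice Similarity Coefficient reported above can be computed as in the following generic NumPy sketch over binary lesion masks; this is illustrative, not the authors' evaluation code.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2*|A & B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two overlapping square "lesions".
a = np.zeros((64, 64), dtype=np.uint8)
b = np.zeros((64, 64), dtype=np.uint8)
a[10:40, 10:40] = 1
b[20:50, 20:50] = 1
print(f"DSC = {dice_coefficient(a, b):.3f}")
```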
  

EBHI: A New Enteroscope Biopsy Histopathological H&E Image Dataset for Image Classification Evaluation

Feb 17, 2022
Weiming Hu, Chen Li, Xiaoyan Li, Md Mamunur Rahaman, Yong Zhang, Haoyuan Chen, Wanli Liu, Yudong Yao, Hongzan Sun, Ning Xu, Xinyu Huang, Marcin Grzegorzek

Background and purpose: Colorectal cancer has become the third most common cancer worldwide, accounting for approximately 10% of cancer patients. Early detection of the disease is important for the treatment of colorectal cancer patients. Histopathological examination is the gold standard for screening colorectal cancer. However, the current lack of histopathological image datasets of colorectal cancer, especially enteroscope biopsies, hinders the accurate evaluation of computer-aided diagnosis techniques. Methods: A new publicly available Enteroscope Biopsy Histopathological H&E Image Dataset (EBHI) is published in this paper. To demonstrate the effectiveness of the EBHI dataset, we have utilized several machine learning, convolutional neural network and novel transformer-based classifiers for experimentation and evaluation, using images with a magnification of 200x. Results: Experimental results show that deep learning methods perform well on the EBHI dataset. Traditional machine learning methods achieve a maximum accuracy of 76.02%, while the deep learning methods achieve a maximum accuracy of 95.37%. Conclusion: To the best of our knowledge, EBHI is the first publicly available colorectal histopathology enteroscope biopsy dataset with four magnifications and five types of images of tumor differentiation stages, totaling 5,532 images. We believe that EBHI could attract researchers to explore new classification algorithms for the automated diagnosis of colorectal cancer, which could help physicians and patients in clinical settings.
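
As an illustration of the kind of baseline such a dataset can be evaluated with, here is a hedged sketch of fine-tuning a pretrained ResNet-18 with PyTorch/torchvision; the EBHI/train directory layout, hyperparameters, and backbone are assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical layout: EBHI/train/<differentiation_stage>/<image>.png
train_set = datasets.ImageFolder("EBHI/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Replace the ImageNet head with one output per tumour differentiation class.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # illustrative number of epochs
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```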

  

Prostate Gland Segmentation in Histology Images via Residual and Multi-Resolution U-Net

May 21, 2021
Julio Silva-Rodríguez, Elena Payá-Bosch, Gabriel García, Adrián Colomer, Valery Naranjo

Prostate cancer is one of the most prevalent cancers worldwide. One of the key factors in reducing its mortality is early detection. Computer-aided diagnosis systems for this task are based on the structural analysis of glands in histology images. Hence, accurate gland detection and segmentation are crucial for a successful prediction. The methodological basis of this work is prostate gland segmentation based on U-Net convolutional neural network architectures modified with residual and multi-resolution blocks, trained using data augmentation techniques. In an image-level comparison on the test subset, the residual configuration outperforms previous state-of-the-art approaches, reaching an average Dice Index of 0.77.
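
The residual modification mentioned above can be illustrated with a generic residual convolutional block of the kind a U-Net stage might be built from; this PyTorch sketch is an illustration under that assumption, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a projected skip connection."""
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_channels),
        )
        # 1x1 projection so the skip connection matches the output channel count.
        self.skip = nn.Conv2d(in_channels, out_channels, kernel_size=1) \
            if in_channels != out_channels else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.body(x) + self.skip(x))

# One encoder stage applied to a batch of RGB histology patches.
x = torch.randn(2, 3, 256, 256)
print(ResidualBlock(3, 64)(x).shape)  # torch.Size([2, 64, 256, 256])
```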

  

Segmentation of Breast Regions in Mammogram Based on Density: A Review

Sep 25, 2012
Nafiza Saidin, Harsa Amylia Mat Sakim, Umi Kalthum Ngah, Ibrahim Lutfi Shuaib

The focus of this paper is to review approaches for the segmentation of breast regions in mammograms according to breast density. Studies based on density have been undertaken because of the relationship between breast cancer and density. Breast cancer usually occurs in the fibroglandular area of the breast tissue, which appears bright on mammograms and is described as breast density. Most of the studies focus on classification methods for glandular tissue detection. Others highlight segmentation methods for fibroglandular tissue, while few researchers have performed segmentation of the breast anatomical regions based on density. There have also been works on the segmentation of other specific parts of the breast, such as the nipple position, the skin-air interface, or the pectoral muscles. The problems of evaluating segmentation results against ground truth are also discussed in this paper.

* 9 pages, 2 figures, IJCSI International Journal of Computer Science Issues, Vol. 9, Issue 4, No 2, July 2012 
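
As a toy illustration of density-based separation (not taken from any of the reviewed works), the sketch below thresholds bright regions with Otsu's method using scikit-image; the sample image is a stand-in for a mammogram.

```python
import numpy as np
from skimage import data, filters, morphology

# Stand-in grayscale image; a real pipeline would load a mammogram here.
image = data.camera().astype(float) / 255.0

threshold = filters.threshold_otsu(image)   # global intensity threshold
dense_mask = image > threshold              # bright regions, analogous to dense tissue
dense_mask = morphology.remove_small_objects(dense_mask, min_size=64)  # remove speckle

print(f"threshold = {threshold:.3f}, bright-region fraction = {dense_mask.mean():.2%}")
```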
  

A Comparative Study on Polyp Classification using Convolutional Neural Networks

Jul 12, 2020
Krushi Patel, Kaidong Li, Ke Tao, Quan Wang, Ajay Bansal, Amit Rastogi, Guanghui Wang

Colorectal cancer is the third most common cancer diagnosed in both men and women in the United States. Most colorectal cancers start as a growth on the inner lining of the colon or rectum, called a polyp. Not all polyps are cancerous, but some can develop into cancer. Early detection and recognition of the type of polyp is critical to preventing cancer and changing outcomes. However, visual classification of polyps is challenging due to the varying illumination conditions of endoscopy, variations in texture and appearance, and overlapping morphology between polyp types. More importantly, evaluation of polyp patterns by gastroenterologists is subjective, leading to poor agreement among observers. Deep convolutional neural networks have proven very successful in object classification across various object categories. In this work, we compare the performance of state-of-the-art general object classification models for polyp classification. We trained a total of six CNN models end-to-end using a dataset of 157 video sequences composed of two types of polyps: hyperplastic and adenomatous. Our results demonstrate that state-of-the-art CNN models can classify polyps with an accuracy comparable to or better than that reported among gastroenterologists. The results of this study can guide future research in polyp classification.
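
A hedged sketch of how such a backbone comparison might be set up with torchvision pretrained models, replacing each classification head with a two-way (hyperplastic vs. adenomatous) output; the backbone list is an assumption and data loading is omitted.

```python
import torch.nn as nn
from torchvision import models

def build_two_class_model(name: str) -> nn.Module:
    """Replace a pretrained backbone's classification head with a 2-way output."""
    if name == "resnet50":
        m = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        m.fc = nn.Linear(m.fc.in_features, 2)
    elif name == "densenet121":
        m = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
        m.classifier = nn.Linear(m.classifier.in_features, 2)
    elif name == "vgg16":
        m = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
        m.classifier[-1] = nn.Linear(m.classifier[-1].in_features, 2)
    else:
        raise ValueError(f"unknown backbone: {name}")
    return m

# Each model would then be fine-tuned end-to-end on the polyp frames.
for backbone in ["resnet50", "densenet121", "vgg16"]:
    model = build_two_class_model(backbone)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{backbone}: {n_params / 1e6:.1f}M parameters")
```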

  

Gleason Grading of Histology Prostate Images through Semantic Segmentation via Residual U-Net

May 22, 2020
Amartya Kalapahar, Julio Silva-Rodríguez, Adrián Colomer, Fernando López-Mir, Valery Naranjo

Worldwide, prostate cancer is one of the main cancers affecting men. The final diagnosis of prostate cancer is based on the visual detection of Gleason patterns in prostate biopsies by pathologists. Computer-aided diagnosis systems allow the cancerous patterns in the tissue to be delineated and classified via computer-vision algorithms in order to support the physicians' task. The methodological core of this work is a U-Net convolutional neural network for image segmentation, modified with residual blocks, able to segment cancerous tissue according to the full Gleason system. This model outperforms other well-known architectures and reaches a pixel-level Cohen's quadratic kappa of 0.52, on par with previous image-level works in the literature, while also providing a detailed localisation of the patterns.
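
The quadratic-weighted Cohen's kappa used as the pixel-level metric above can be computed with scikit-learn as in this sketch; the label vectors are toy values, not the paper's data.

```python
from sklearn.metrics import cohen_kappa_score

# Toy per-pixel labels: 0 = non-cancerous, higher values = more severe pattern (toy encoding).
y_true = [0, 0, 1, 2, 3, 3, 4, 4, 2, 1]
y_pred = [0, 1, 1, 2, 3, 4, 4, 3, 2, 1]

kappa = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(f"quadratic-weighted kappa = {kappa:.3f}")
```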

  

Learning Pain from Action Unit Combinations: A Weakly Supervised Approach via Multiple Instance Learning

Feb 20, 2018
Zhanli Chen, Rashid Ansari, Diana J. Wilkie

Patient pain can be detected highly reliably from facial expressions using a set of facial muscle-based action units (AUs) defined by the Facial Action Coding System (FACS). A key characteristic of the facial expression of pain is the simultaneous occurrence of pain-related AU combinations, whose automated detection would be highly beneficial for efficient and practical pain monitoring. Existing general Automated Facial Expression Recognition (AFER) systems prove inadequate when applied specifically to detecting pain: they either focus on detecting individual pain-related AUs but not on combinations, or they seek to bypass AU detection by training a binary pain classifier directly on pain intensity data but are limited by the lack of sufficient labeled data for satisfactory training. In this paper, we propose a new approach that mimics the strategy of human coders by decoupling pain detection into two consecutive tasks: one performed at the individual video-frame level and the other at the video-sequence level. Using state-of-the-art AFER tools to detect single AUs at the frame level, we propose two novel data structures to encode AU combinations from single AU scores. Two weakly supervised learning frameworks, namely multiple instance learning (MIL) and multiple clustered instance learning (MCIL), are employed, corresponding to each data structure, to learn pain from video sequences. Experimental results show an 87% pain recognition accuracy with 0.94 AUC (Area Under Curve) on the UNBC-McMaster Shoulder Pain Expression dataset. Tests on long videos in a lung cancer patient video dataset demonstrate the potential value of the proposed system for pain monitoring in clinical settings.
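
A minimal sketch of the multiple instance learning idea (not the authors' MIL/MCIL frameworks): frames are instances, a video is a bag, and a simple baseline represents each bag by the element-wise maximum of its instance features before fitting a standard classifier. All data below are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_bag(has_pain: bool, n_frames: int = 30, n_features: int = 4) -> np.ndarray:
    """Each row is one frame's (synthetic) AU-combination feature vector."""
    bag = rng.normal(size=(n_frames, n_features))
    if has_pain:
        bag[rng.integers(n_frames)] += 3.0  # only a few frames carry the pain-related AUs
    return bag

bags = [make_bag(i % 2 == 1) for i in range(200)]
labels = np.array([i % 2 for i in range(200)])

# Simple MIL baseline: represent each bag (video) by the max over its instances (frames).
X = np.stack([bag.max(axis=0) for bag in bags])
clf = LogisticRegression().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```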

  

Mammograms Classification: A Review

Mar 04, 2022
Marawan Elbatel

Digital mammography, an advanced, reliable, and low-cost screening method, has been used as an effective imaging method for breast cancer detection. With an increased focus on technologies to aid healthcare, mammogram images have been utilized in developing computer-aided diagnosis systems that can potentially help in clinical diagnosis. Researchers have shown that artificial intelligence and its emerging technologies can be used in the early detection of the disease and can improve radiologists' performance in assessing breast cancer. In this paper, we review the methods developed for mammogram mass classification in two categories. The first is classifying manually provided cropped regions of interest (ROIs) as either malignant or benign, and the second is the classification of automatically segmented ROIs as either malignant or benign. We also provide an overview of the datasets and evaluation metrics used in the classification task. Finally, we compare and discuss deep learning approaches with classical image processing and learning approaches in this domain.
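
For reference, the evaluation metrics commonly reported for malignant-vs-benign ROI classification (accuracy, sensitivity, specificity, AUC) can be computed as in this scikit-learn sketch; the predictions are toy values, not results from the review.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([0, 0, 0, 1, 1, 1, 1, 0, 1, 0])      # 1 = malignant ROI
y_score = np.array([0.1, 0.4, 0.3, 0.8, 0.7, 0.9, 0.6, 0.2, 0.4, 0.5])
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"accuracy    = {(tp + tn) / len(y_true):.2f}")
print(f"sensitivity = {tp / (tp + fn):.2f}")   # recall on malignant ROIs
print(f"specificity = {tn / (tn + fp):.2f}")
print(f"AUC         = {roc_auc_score(y_true, y_score):.2f}")
```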

  

Superpixel Based Segmentation and Classification of Polyps in Wireless Capsule Endoscopy

May 28, 2018
Omid Haji Maghsoudi

Wireless Capsule Endoscopy (WCE) is a relatively new technology for recording the entire GI tract in vivo. The large number of frames captured during an examination makes it difficult for physicians to review them all, and reducing the review time with intelligent methods has been a challenge. Polyps are tissue growths on the surface of the intestinal tract rather than inside an organ. Most polyps are not cancerous, but once one grows larger than a centimeter, it has a high chance of turning into cancer. WCE frames offer the possibility of detecting polyps at an early stage. Here, the application of simple linear iterative clustering (SLIC) superpixels for the segmentation of polyps in WCE frames is evaluated. Different SLIC superpixel numbers are examined to find the highest sensitivity for the detection of polyps. SLIC superpixel segmentation is promising for improving on the results of previous studies. Finally, the superpixels were classified using a support vector machine (SVM) by extracting texture and color features. The classification results showed a sensitivity of 91%.

* This paper has been published in SPMB 2017 
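
A hedged sketch of the SLIC-plus-SVM pipeline described above, using scikit-image and scikit-learn on a stand-in image; the per-superpixel features (mean colour only) and the labels are simplifications, not the paper's texture and colour features.

```python
import numpy as np
from skimage import data, segmentation
from sklearn.svm import SVC

image = data.astronaut()  # stand-in for a capsule-endoscopy frame
segments = segmentation.slic(image, n_segments=200, compactness=10, start_label=0)

# Mean RGB colour per superpixel as a very simple feature vector.
features = np.stack([image[segments == s].mean(axis=0) for s in np.unique(segments)])

# Hypothetical labels (1 = polyp superpixel); in practice these come from annotated masks.
labels = np.zeros(len(features), dtype=int)
labels[:10] = 1

clf = SVC(kernel="rbf").fit(features, labels)
print("superpixels predicted as polyp:", int(clf.predict(features).sum()))
```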
  