"cancer detection": models, code, and papers

EBHI: A New Enteroscope Biopsy Histopathological H&E Image Dataset for Image Classification Evaluation

Feb 17, 2022
Weiming Hu, Chen Li, Xiaoyan Li, Md Mamunur Rahaman, Yong Zhang, Haoyuan Chen, Wanli Liu, Yudong Yao, Hongzan Sun, Ning Xu, Xinyu Huang, Marcin Grzegorzek

Background and purpose: Colorectal cancer has become the third most common cancer worldwide, accounting for approximately 10% of cancer patients. Early detection of the disease is important for the treatment of colorectal cancer patients. Histopathological examination is the gold standard for screening colorectal cancer. However, the current lack of histopathological image datasets of colorectal cancer, especially enteroscope biopsies, hinders the accurate evaluation of computer-aided diagnosis techniques. Methods: A new publicly available Enteroscope Biopsy Histopathological H&E Image Dataset (EBHI) is published in this paper. To demonstrate the effectiveness of the EBHI dataset, we have utilized several machine learning, convolutional neural network, and novel transformer-based classifiers for experimentation and evaluation, using images at a magnification of 200x. Results: Experimental results show that deep learning methods perform well on the EBHI dataset. Traditional machine learning methods achieve a maximum accuracy of 76.02%, while deep learning methods achieve a maximum accuracy of 95.37%. Conclusion: To the best of our knowledge, EBHI is the first publicly available colorectal histopathology enteroscope biopsy dataset with four magnifications and five types of images of tumor differentiation stages, totaling 5532 images. We believe that EBHI could attract researchers to explore new classification algorithms for the automated diagnosis of colorectal cancer, which could help physicians and patients in clinical settings.
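
The abstract describes benchmarking classical machine learning, CNN, and transformer-based classifiers on the 200x images. As a minimal sketch of how one such deep learning baseline might be set up, the snippet below fine-tunes an ImageNet-pretrained ResNet-18 on class folders; the folder layout, backbone choice, and hyperparameters are illustrative assumptions, not the authors' exact protocol.

```python
# Sketch: fine-tuning a pretrained CNN on EBHI-style class folders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumes 200x images sorted into one sub-folder per differentiation class.
train_set = datasets.ImageFolder("EBHI/200x/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```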

  

Prostate Gland Segmentation in Histology Images via Residual and Multi-Resolution U-Net

May 21, 2021
Julio Silva-Rodríguez, Elena Payá-Bosch, Gabriel García, Adrián Colomer, Valery Naranjo

Prostate cancer is one of the most prevalent cancers worldwide. One of the key factors in reducing its mortality is early detection. Computer-aided diagnosis systems for this task are based on glandular structural analysis in histology images. Hence, accurate gland detection and segmentation are crucial for a successful prediction. The methodological basis of this work is a prostate gland segmentation approach based on U-Net convolutional neural network architectures modified with residual and multi-resolution blocks, trained using data augmentation techniques. On the test subset, the residual configuration outperforms the previous state-of-the-art approaches in an image-level comparison, reaching an average Dice index of 0.77.
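
Since the abstract's key idea is replacing plain U-Net convolution stages with residual blocks, here is a minimal sketch of what such a block might look like; the channel sizes, normalization, and 1x1 projection are assumptions rather than the authors' exact configuration.

```python
# Sketch: residual convolutional block for a U-Net-style encoder/decoder.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # 1x1 projection so the identity path matches the output channels.
        self.skip = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.conv(x) + self.skip(x))
```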

  

Segmentation of Breast Regions in Mammogram Based on Density: A Review

Sep 25, 2012
Nafiza Saidin, Harsa Amylia Mat Sakim, Umi Kalthum Ngah, Ibrahim Lutfi Shuaib

The focus of this paper is to review approaches for the segmentation of breast regions in mammograms according to breast density. Studies based on density have been undertaken because of the relationship between breast cancer and density. Breast cancer usually occurs in the fibroglandular area of breast tissue, which appears bright on mammograms and is described as breast density. Most of the studies focus on classification methods for glandular tissue detection. Others highlight segmentation methods for fibroglandular tissue, while few researchers perform segmentation of the breast anatomical regions based on density. There have also been works on the segmentation of other specific parts of the breast, such as detection of the nipple position, the skin-air interface, or the pectoral muscles. Problems in evaluating segmentation performance against ground truth are also discussed in this paper.

* 9 pages, 2 figures, IJCSI International Journal of Computer Science Issues, Vol. 9, Issue 4, No 2, July 2012
  

A Comparative Study on Polyp Classification using Convolutional Neural Networks

Jul 12, 2020
Krushi Patel, Kaidong Li, Ke Tao, Quan Wang, Ajay Bansal, Amit Rastogi, Guanghui Wang

Colorectal cancer is the third most common cancer diagnosed in both men and women in the United States. Most colorectal cancers start as a growth on the inner lining of the colon or rectum, called a polyp. Not all polyps are cancerous, but some can develop into cancer. Early detection and recognition of the type of polyp is critical to preventing cancer and changing outcomes. However, visual classification of polyps is challenging due to the varying illumination conditions of endoscopy, variations in texture and appearance, and overlapping morphology between polyps. More importantly, evaluation of polyp patterns by gastroenterologists is subjective, leading to poor agreement among observers. Deep convolutional neural networks have proven very successful in object classification across various object categories. In this work, we compare the performance of state-of-the-art general object classification models for polyp classification. We trained a total of six CNN models end-to-end using a dataset of 157 video sequences composed of two types of polyps: hyperplastic and adenomatous. Our results demonstrate that state-of-the-art CNN models can successfully classify polyps with an accuracy comparable to or better than that reported among gastroenterologists. The results of this study can guide future research in polyp classification.
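
To make the comparative setup concrete, the sketch below prepares a few ImageNet-pretrained backbones with a two-way head for hyperplastic-vs-adenomatous classification. The specific architectures and the head replacement are assumptions based on the abstract, not the authors' exact six models.

```python
# Sketch: several pretrained backbones adapted for binary polyp classification.
import torch.nn as nn
from torchvision import models

def build_backbones(num_classes=2):
    nets = {
        "resnet50": models.resnet50(weights="IMAGENET1K_V1"),
        "densenet121": models.densenet121(weights="IMAGENET1K_V1"),
        "vgg16": models.vgg16(weights="IMAGENET1K_V1"),
    }
    # Replace each classifier head with a 2-way output layer.
    nets["resnet50"].fc = nn.Linear(nets["resnet50"].fc.in_features, num_classes)
    nets["densenet121"].classifier = nn.Linear(
        nets["densenet121"].classifier.in_features, num_classes)
    nets["vgg16"].classifier[6] = nn.Linear(
        nets["vgg16"].classifier[6].in_features, num_classes)
    return nets

# Each model in build_backbones() would then be trained end-to-end on the
# polyp frames and compared on held-out video sequences.
```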

  

Gleason Grading of Histology Prostate Images through Semantic Segmentation via Residual U-Net

May 22, 2020
Amartya Kalapahar, Julio Silva-Rodríguez, Adrián Colomer, Fernando López-Mir, Valery Naranjo

Worldwide, prostate cancer is one of the main cancers affecting men. The final diagnosis of prostate cancer is based on the visual detection of Gleason patterns in prostate biopsies by pathologists. Computer-aided diagnosis systems make it possible to delineate and classify the cancerous patterns in the tissue via computer-vision algorithms in order to support the physicians' task. The methodological core of this work is a U-Net convolutional neural network for image segmentation, modified with residual blocks, able to segment cancerous tissue according to the full Gleason system. This model outperforms other well-known architectures and reaches a pixel-level quadratically weighted Cohen's Kappa of 0.52, on par with previous image-level works in the literature, while also providing a detailed localisation of the patterns.
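
The reported metric is a pixel-level quadratically weighted Cohen's Kappa. A minimal sketch of how that score can be computed from a predicted label map and a ground-truth mask is shown below; the label encoding and mask shapes are illustrative assumptions.

```python
# Sketch: pixel-level quadratically weighted Cohen's Kappa for label masks.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def pixel_quadratic_kappa(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    # Flatten the H x W label maps so each pixel counts as one rated item.
    return cohen_kappa_score(gt_mask.ravel(), pred_mask.ravel(),
                             weights="quadratic")

# Example with random label maps standing in for Gleason-pattern masks.
pred = np.random.randint(0, 5, size=(256, 256))
gt = np.random.randint(0, 5, size=(256, 256))
print(pixel_quadratic_kappa(pred, gt))
```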

  

Learning Pain from Action Unit Combinations: A Weakly Supervised Approach via Multiple Instance Learning

Feb 20, 2018
Zhanli Chen, Rashid Ansari, Diana J. Wilkie

Patient pain can be detected highly reliably from facial expressions using a set of facial muscle-based action units (AUs) defined by the Facial Action Coding System (FACS). A key characteristic of the facial expression of pain is the simultaneous occurrence of pain-related AU combinations, whose automated detection would be highly beneficial for efficient and practical pain monitoring. Existing general Automated Facial Expression Recognition (AFER) systems prove inadequate when applied specifically to detecting pain: they either focus on detecting individual pain-related AUs but not combinations, or they seek to bypass AU detection by training a binary pain classifier directly on pain intensity data but are limited by the lack of sufficient labeled data for satisfactory training. In this paper, we propose a new approach that mimics the strategy of human coders by decoupling pain detection into two consecutive tasks: one performed at the individual video-frame level and the other at the video-sequence level. Using state-of-the-art AFER tools to detect single AUs at the frame level, we propose two novel data structures to encode AU combinations from single AU scores. Two weakly supervised learning frameworks, namely multiple instance learning (MIL) and multiple clustered instance learning (MCIL), are employed, one for each data structure, to learn pain from video sequences. Experimental results show an 87% pain recognition accuracy with 0.94 AUC (area under the curve) on the UNBC-McMaster Shoulder Pain Expression dataset. Tests on long videos in a lung cancer patient video dataset demonstrate the potential value of the proposed system for pain monitoring in clinical settings.
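
To illustrate the weakly supervised setting, here is a minimal max-pooling MIL sketch: a shared instance scorer is applied to every frame's AU-combination features and the video-level (bag) prediction is the maximum instance score. The feature dimensionality and the max-pooling choice are assumptions; the paper's MIL/MCIL formulations and data structures are more elaborate.

```python
# Sketch: max-pooling multiple instance learning over per-frame AU features.
import torch
import torch.nn as nn

class MaxPoolMIL(nn.Module):
    def __init__(self, feat_dim=16):
        super().__init__()
        self.instance_scorer = nn.Linear(feat_dim, 1)

    def forward(self, bag):                 # bag: (num_frames, feat_dim)
        scores = self.instance_scorer(bag)  # per-frame pain evidence
        return torch.sigmoid(scores.max())  # bag-level pain probability

model = MaxPoolMIL()
criterion = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

bag = torch.randn(120, 16)   # 120 frames of AU-combination features (dummy)
label = torch.tensor(1.0)    # sequence-level "pain" label
loss = criterion(model(bag), label)
loss.backward()
optimizer.step()
```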

  

Mammograms Classification: A Review

Mar 04, 2022
Marawan Elbatel

Digital mammography, an advanced, reliable, and low-cost screening method, has been used as an effective imaging technique for breast cancer detection. With an increased focus on technologies to aid healthcare, mammogram images have been utilized in developing computer-aided diagnosis systems that can potentially help in clinical diagnosis. Researchers have shown that artificial intelligence and its emerging technologies can be used for early detection of the disease and can improve radiologists' performance in assessing breast cancer. In this paper, we review the methods developed for mammogram mass classification in two categories. The first is classifying manually provided cropped regions of interest (ROIs) as either malignant or benign, and the second is the classification of automatically segmented ROIs as either malignant or benign. We also provide an overview of the datasets and evaluation metrics used in the classification task. Finally, we compare and discuss deep learning approaches versus classical image processing and learning approaches in this domain.

  

Superpixel Based Segmentation and Classification of Polyps in Wireless Capsule Endoscopy

May 28, 2018
Omid Haji Maghsoudi

Wireless Capsule Endoscopy (WCE) is a relatively new technology for recording the entire gastrointestinal (GI) tract in vivo. The large number of frames captured during an examination makes it difficult for physicians to review them all, so reducing the review time with intelligent methods has been a challenge. Polyps are tissue growths on the surface of the intestinal tract rather than inside an organ. Most polyps are not cancerous, but once one grows larger than a centimeter, its chance of turning into cancer increases considerably. WCE frames therefore offer the possibility of early-stage polyp detection. Here, the application of simple linear iterative clustering (SLIC) superpixels for the segmentation of polyps in WCE frames is evaluated. Different SLIC superpixel numbers are examined to find the highest sensitivity for polyp detection. SLIC superpixel segmentation shows promise in improving on the results of previous studies. Finally, the superpixels are classified with a support vector machine (SVM) using extracted texture and color features. The classification results show a sensitivity of 91%.
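
As a minimal sketch of the pipeline the abstract outlines, the snippet below runs SLIC on a frame and builds a simple per-superpixel colour feature before handing the features to an SVM. The superpixel count and the mean-RGB feature are assumptions, not the author's exact texture and colour descriptors.

```python
# Sketch: SLIC superpixels on a WCE frame + per-superpixel features for an SVM.
import numpy as np
from skimage.segmentation import slic
from sklearn.svm import SVC

def superpixel_features(frame, n_segments=200):
    """Return the segment map and mean-RGB features, one row per superpixel."""
    segments = slic(frame, n_segments=n_segments, compactness=10)
    feats = [frame[segments == s].mean(axis=0) for s in np.unique(segments)]
    return segments, np.array(feats)

frame = np.random.rand(256, 256, 3)   # stand-in for an RGB WCE frame in [0, 1]
segments, feats = superpixel_features(frame)

# Training (hypothetical): label each superpixel polyp (1) / background (0)
# from annotation masks, pool features over frames, then fit a standard SVM:
# clf = SVC(kernel="rbf").fit(train_features, train_labels)
```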

* This paper has been published in SPMB 2017 
  

Machine learning approach for segmenting glands in colon histology images using local intensity and texture features

May 15, 2019
Rupali Khatun, Soumick Chatterjee

Colon cancer is one of the most common types of cancer. Treatment is planned depending on the grade or stage of the cancer, and one of the preconditions for grading colon cancer is segmenting the glandular structures of the tissue. Manual segmentation is very time-consuming, and the resulting delays put patients at risk. The principal objective of this project is to assist pathologists in the accurate detection of colon cancer. In this paper, the authors propose an algorithm for the automatic segmentation of glands in colon histology using local intensity and texture features. The dataset images are cropped into patches with different window sizes, and intensity and texture-based features are computed for each patch. A random forest classifier is used to assign labels to the patches, and a multilevel, hierarchical random forest technique is proposed. This solution is fast, accurate, and well suited to a clinical setup.
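
A minimal sketch of the patch-based pipeline described above is given below: crop fixed-size patches, compute simple intensity and GLCM texture features, and train a random forest on the per-patch features. The window size and the particular features are assumptions, not the authors' exact choices.

```python
# Sketch: patch extraction + intensity/texture features + random forest.
import numpy as np
from skimage.util import view_as_windows
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def patch_features(gray_image, window=32, step=32):
    """gray_image: 2-D float array in [0, 1]; returns one feature row per patch."""
    patches = view_as_windows(gray_image, (window, window), step=step)
    feats = []
    for patch in patches.reshape(-1, window, window):
        glcm = graycomatrix((patch * 255).astype(np.uint8),
                            distances=[1], angles=[0], levels=256)
        feats.append([patch.mean(), patch.std(),
                      graycoprops(glcm, "contrast")[0, 0],
                      graycoprops(glcm, "homogeneity")[0, 0]])
    return np.array(feats)

features = patch_features(np.random.rand(256, 256))  # dummy histology tile
# Training (hypothetical patch_labels from gland annotations):
# clf = RandomForestClassifier(n_estimators=100).fit(features, patch_labels)
```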

* 8th International Advance Computing Conference (IACC), 2018 
  

Towards a Complete Pipeline for Segmenting Nuclei in Feulgen-Stained Images

Feb 19, 2020
Luiz Antonio Buschetto Macarini, Aldo von Wangenheim, Felipe Perozzo Daltoé, Alexandre Sherlley Casimiro Onofre, Fabiana Botelho de Miranda Onofre, Marcelo Ricardo Stemmer

Cervical cancer is the second most common cancer type in women around the world. In some countries, due to non-existent or inadequate screening, it is often detected at late stages, making standard treatment options often absent or unaffordable. It is a deadly disease that could benefit from early detection approaches. Screening is usually done by cytological exams, which consist of visually inspecting the nuclei in search of morphological alterations. Since this is done by humans, some subjectivity is naturally introduced. Computational methods could be used to reduce it, and the first stage of such a process would be nuclei segmentation. In this context, we present a complete pipeline for the segmentation of nuclei in Feulgen-stained images using convolutional neural networks. Here we show the entire segmentation process, from sample collection through pre-processing, network training, post-processing, and results evaluation. We achieved an overall IoU of 0.78, showing the feasibility of nuclei segmentation on Feulgen-stained images. The code is available at: https://github.com/luizbuschetto/feulgen_nuclei_segmentation.
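
The pipeline is evaluated with intersection over union (IoU); a minimal sketch of that metric for binary nuclei masks is shown below (the mask shapes are illustrative assumptions).

```python
# Sketch: intersection-over-union between predicted and ground-truth masks.
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return intersection / union if union else 1.0

# Example: two 512x512 binary nuclei masks with partial overlap.
pred = np.zeros((512, 512), dtype=bool); pred[100:200, 100:200] = True
gt = np.zeros((512, 512), dtype=bool); gt[120:220, 120:220] = True
print(f"IoU = {iou(pred, gt):.2f}")
```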

* 7 pages, 8 figures (Figure 2 with "a" and "b"), conference paper accepted for presentation at XI Computer on the Beach (https://www.computeronthebeach.com.br/)
  