
Nikolaos Papanikolopoulos


The KiTS21 Challenge: Automatic segmentation of kidneys, renal tumors, and renal cysts in corticomedullary-phase CT

Jul 05, 2023
Nicholas Heller, Fabian Isensee, Dasha Trofimova, Resha Tejpaul, Zhongchen Zhao, Huai Chen, Lisheng Wang, Alex Golts, Daniel Khapun, Daniel Shats, Yoel Shoshan, Flora Gilboa-Solomon, Yasmeen George, Xi Yang, Jianpeng Zhang, Jing Zhang, Yong Xia, Mengran Wu, Zhiyang Liu, Ed Walczak, Sean McSweeney, Ranveer Vasdev, Chris Hornung, Rafat Solaiman, Jamee Schoephoerster, Bailey Abernathy, David Wu, Safa Abdulkadir, Ben Byun, Justice Spriggs, Griffin Struyk, Alexandra Austin, Ben Simpson, Michael Hagstrom, Sierra Virnig, John French, Nitin Venkatesh, Sarah Chan, Keenan Moore, Anna Jacobsen, Susan Austin, Mark Austin, Subodh Regmi, Nikolaos Papanikolopoulos, Christopher Weight


This paper presents the challenge report for the 2021 Kidney and Kidney Tumor Segmentation Challenge (KiTS21), held in conjunction with the 2021 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI). KiTS21 is a sequel to its first edition in 2019, and it features a variety of innovations in how the challenge was designed, in addition to a larger dataset. A novel annotation method was used to collect three separate annotations for each region of interest, and these annotations were performed in a fully transparent setting using a web-based annotation tool. Further, the KiTS21 test set was collected from an outside institution, challenging participants to develop methods that generalize well to new populations. Nonetheless, the top-performing teams achieved a significant improvement over the state of the art set in 2019, and this performance is shown to inch ever closer to human-level performance. An in-depth meta-analysis is presented describing which methods were used and how they fared on the leaderboard, as well as the characteristics of cases that generally saw good performance and those that did not. Overall, KiTS21 facilitated a significant advancement in the state of the art in kidney tumor segmentation and provides useful insights that are applicable to the field of semantic segmentation as a whole.

* 34 pages, 12 figures 

Pre-Clustering Point Clouds of Crop Fields Using Scalable Methods

Jul 22, 2021
Henry J. Nelson, Nikolaos Papanikolopoulos


In order to apply the recent successes of automated plant phenotyping and machine learning on a large scale, efficient and general algorithms must be designed to intelligently split crop fields into small, yet actionable, portions that can then be processed by more complex algorithms. In this paper we observe a similarity between the current state of the art for this problem and a commonly used density-based clustering algorithm, Quickshift. Exploiting this similarity, we propose a number of novel, application-specific algorithms with the goal of producing a general and scalable plant segmentation algorithm. The novel algorithms proposed in this work are shown to produce quantitatively better results than the current state of the art while being less sensitive to input parameters and maintaining the same algorithmic time complexity. When incorporated into field-scale phenotyping systems, the proposed algorithms should work as a drop-in replacement that can greatly improve the accuracy of results while ensuring that performance and scalability remain undiminished.
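
To make the Quickshift connection concrete, the sketch below applies a generic Quickshift-style, density-based linking step to a 3D point cloud. The radius and distance-threshold parameters are illustrative assumptions, and this is not one of the application-specific algorithms proposed in the paper.

```python
# Minimal Quickshift-style clustering sketch for 3D crop-field points.
# Illustrative only: parameters and the density estimate are assumptions,
# not the paper's proposed variants.
import numpy as np
from scipy.spatial import cKDTree

def quickshift_3d(points, density_radius=0.05, tau=0.15):
    """Cluster an (N, 3) point cloud by linking each point to its nearest
    higher-density neighbor within distance tau (Quickshift-style)."""
    tree = cKDTree(points)
    # Kernel-free density estimate: neighbor counts within a fixed radius.
    density = np.array([len(tree.query_ball_point(p, density_radius)) for p in points])

    parent = np.arange(len(points))
    for i, p in enumerate(points):
        # Candidate neighbors within tau, ordered by distance to p.
        idx = sorted(tree.query_ball_point(p, tau),
                     key=lambda j: np.linalg.norm(points[j] - p))
        for j in idx:
            if density[j] > density[i]:
                parent[i] = j  # link to the nearest denser neighbor
                break

    # Follow parent links to mode points; points sharing a mode share a cluster.
    def root(i):
        while parent[i] != i:
            i = parent[i]
        return i

    return np.array([root(i) for i in range(len(points))])
```

Points whose links terminate at the same local density mode receive the same cluster label, which is the property that makes this family of methods attractive for splitting a large field scan into per-plant regions.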


Learning Log-Determinant Divergences for Positive Definite Matrices

Apr 13, 2021
Anoop Cherian, Panagiotis Stanitsas, Jue Wang, Mehrtash Harandi, Vassilios Morellas, Nikolaos Papanikolopoulos


Representations in the form of Symmetric Positive Definite (SPD) matrices have been popularized in a variety of visual learning applications due to their demonstrated ability to capture rich second-order statistics of visual data. There exist several similarity measures for comparing SPD matrices with documented benefits. However, selecting an appropriate measure for a given problem remains a challenge and in most cases, is the result of a trial-and-error process. In this paper, we propose to learn similarity measures in a data-driven manner. To this end, we capitalize on the αβ-log-det divergence, which is a meta-divergence parametrized by scalars α and β, subsuming a wide family of popular information divergences on SPD matrices for distinct and discrete values of these parameters. Our key idea is to cast these parameters in a continuum and learn them from data. We systematically extend this idea to learn vector-valued parameters, thereby increasing the expressiveness of the underlying non-linear measure. We conjoin the divergence learning problem with several standard tasks in machine learning, including supervised discriminative dictionary learning and unsupervised SPD matrix clustering. We present Riemannian gradient descent schemes for optimizing our formulations efficiently, and show the usefulness of our method on eight standard computer vision tasks.
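
For reference, one commonly used form of the αβ-log-det divergence between SPD matrices X and Y is sketched below; sign and scaling conventions for α and β vary across the literature, so this should be read as illustrative rather than as the exact parametrization used in the paper.

```latex
% One common form of the alpha-beta log-det divergence between SPD matrices
% X and Y, defined for alpha, beta != 0 with alpha + beta != 0; special
% choices of (alpha, beta) recover well-known log-det divergences.
D^{(\alpha,\beta)}(X \,\|\, Y) =
  \frac{1}{\alpha\beta}
  \log\det\!\left(
    \frac{\alpha\,(X Y^{-1})^{\beta} + \beta\,(X Y^{-1})^{-\alpha}}{\alpha + \beta}
  \right)
```

The expression vanishes when X = Y (the argument of the determinant becomes the identity), and treating α and β as continuous, learnable quantities is what allows the measure itself to be fit to data.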

* Accepted at Trans. PAMI (extended version of ICCV 2017 paper). arXiv admin note: substantial text overlap with arXiv:1708.01741 

The state of the art in kidney and kidney tumor segmentation in contrast-enhanced CT imaging: Results of the KiTS19 Challenge

Dec 02, 2019
Nicholas Heller, Fabian Isensee, Klaus H. Maier-Hein, Xiaoshuai Hou, Chunmei Xie, Fengyi Li, Yang Nan, Guangrui Mu, Zhiyong Lin, Miofei Han, Guang Yao, Yaozong Gao, Yao Zhang, Yixin Wang, Feng Hou, Jiawei Yang, Guangwei Xiong, Jiang Tian, Cheng Zhong, Jun Ma, Jack Rickman, Joshua Dean, Bethany Stai, Resha Tejpaul, Makinna Oestreich, Paul Blake, Heather Kaluzniak, Shaneabbas Raza, Joel Rosenberg, Keenan Moore, Edward Walczak, Zachary Rengel, Zach Edgerton, Ranveer Vasdev, Matthew Peterson, Sean McSweeney, Sarah Peterson, Arveen Kalapara, Niranjan Sathianathen, Christopher Weight, Nikolaos Papanikolopoulos


There is a large body of literature linking anatomic and geometric characteristics of kidney tumors to perioperative and oncologic outcomes. Semantic segmentation of these tumors and their host kidneys is a promising tool for quantitatively characterizing these lesions, but its adoption is limited due to the manual effort required to produce high-quality 3D segmentations of these structures. Recently, methods based on deep learning have shown excellent results in automatic 3D segmentation, but they require large datasets for training, and there remains little consensus on which methods perform best. The 2019 Kidney and Kidney Tumor Segmentation challenge (KiTS19) was a competition held in conjunction with the 2019 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) which sought to address these issues and stimulate progress on this automatic segmentation problem. A training set of 210 cross-sectional CT images with kidney tumors was publicly released with corresponding semantic segmentation masks. 106 teams from five continents used this data to develop automated systems to predict the true segmentation masks on a test set of 90 CT images for which the corresponding ground truth segmentations were kept private. These predictions were scored and ranked according to their average Sørensen-Dice coefficient between the kidney and tumor across all 90 cases. The winning team achieved a Dice of 0.974 for kidney and 0.851 for tumor, approaching the inter-annotator performance on kidney (0.983) but falling short on tumor (0.923). This challenge has now entered an "open leaderboard" phase where it serves as a challenging benchmark in 3D semantic segmentation.
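
For concreteness, a minimal sketch of the Sørensen-Dice computation described above might look like the following. The helper names and the treatment of the kidney label are illustrative assumptions; the official challenge evaluation code may differ in detail.

```python
# Minimal Sørensen-Dice coefficient between binary 3D masks (illustrative
# sketch only; the official KiTS19 evaluation may handle empty masks and
# class definitions differently).
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice = 2|A ∩ B| / (|A| + |B|) for boolean arrays of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

def per_case_score(pred_labels, truth_labels):
    """Average of kidney and tumor Dice for one case, assuming the labeling
    convention 0 = background, 1 = kidney, 2 = tumor."""
    kidney = dice(pred_labels >= 1, truth_labels >= 1)  # kidney region incl. tumor (assumed convention)
    tumor = dice(pred_labels == 2, truth_labels == 2)
    return 0.5 * (kidney + tumor)
```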

* 18 pages, 7 figures 

Design and Experiments with a Robot-Driven Underwater Holographic Microscope for Low-Cost In Situ Particle Measurements

Nov 22, 2019
Kevin Mallery, Dario Canelon, Jiarong Hong, Nikolaos Papanikolopoulos


Microscopic analysis of microparticles in situ in diverse water environments is necessary for monitoring water quality and localizing contamination sources. Conventional sensors such as optical microscopes and fluorometers often require complex sample preparation, are restricted to small sample volumes, and are unable to simultaneously capture all pertinent details of a sample such as particle size, shape, concentration, and three-dimensional motion. In this article we propose a novel and cost-effective robotic system for mobile microscopic analysis of particles in situ at various depths, which are fully controlled by the robot system itself. A miniature underwater digital in-line holographic microscope (DIHM) performs high-resolution imaging of microparticles (e.g., algae cells, plastic debris, sediments), while movement allows measurement of particle distributions covering a large area of water. The main contribution of this work is the creation of a low-cost, comprehensive, and small underwater robotic holographic microscope that can assist in a variety of tasks in environmental monitoring and overall assessment of water quality, such as contaminant detection and localization. The resulting system provides some unique capabilities, such as expanded and systematic coverage of large bodies of water at a low cost. Several challenges, such as the trade-off between image quality and cost, are addressed to satisfy the aforementioned goals.

* 7 pages, 8 figures 

The Role of Publicly Available Data in MICCAI Papers from 2014 to 2018

Aug 12, 2019
Nicholas Heller, Jack Rickman, Christopher Weight, Nikolaos Papanikolopoulos


Widely-used public benchmarks are of huge importance to computer vision and machine learning research, especially with the computational resources required to reproduce state of the art results quickly becoming untenable. In medical image computing, the wide variety of image modalities and problem formulations yields a huge task-space for benchmarks to cover, and thus the widespread adoption of standard benchmarks has been slow, and barriers to releasing medical data exacerbate this issue. In this paper, we examine the role that publicly available data has played in MICCAI papers from the past five years. We find that more than half of these papers are based on private data alone, although this proportion seems to be decreasing over time. Additionally, we observed that after controlling for open access publication and the release of code, papers based on public data were cited over 60% more per year than their private-data counterparts. Further, we found that more than 20% of papers using public data did not provide a citation to the dataset or associated manuscript, highlighting the "second-rate" status that data contributions often take compared to theoretical ones. We conclude by making recommendations for MICCAI policies which could help to better incentivise data sharing and move the field toward more efficient and reproducible science.

* 8 pages, 2 figures 

Fast Estimating Pedestrian Moving State Based on Single 2D Body Pose by Shallow Neural Network

Jul 11, 2019
Zixing Wang, Nikolaos Papanikolopoulos


The Crossing or Not-Crossing (C/NC) problem is important for autonomous vehicles (AVs) to safely interact with pedestrians. However, this problem setup ignores pedestrians walking along the direction of the vehicle's movement (LONG). To enhance AVs' awareness of pedestrian behavior, we take a first step toward extending the C/NC problem to the C/NC/LONG problem and recognize these states based on a single body pose. In contrast, previous C/NC state classification work depends on multiple poses or contextual information. Our proposed shallow neural network classifier is able to recognize these three states within a very short time. We test it on the JAAD dataset and report an average accuracy of 81.23%. To further improve the classifier's performance, we introduce a computationally efficient method, the action momentum optimizer (AMO), which corrects predictions based on crossing behavior patterns. Our experiments show that, with its help, the classifier performs up to 11.39% better on continuous-pose tests. Furthermore, this model can cooperate with different sensors and algorithms that provide 2D pedestrian body poses, so it is able to work across multiple lighting and weather conditions. In addition, we have created extended pose annotations for the JAAD dataset, which will be publicly released soon.
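
As a rough illustration of the kind of shallow classifier described above, the sketch below maps a single 2D pose to one of the three states. The keypoint count, hidden width, and class ordering are assumptions for illustration, not the authors' exact architecture, and the AMO post-processing step is not shown.

```python
# Sketch of a shallow neural network mapping one 2D body pose to one of
# three pedestrian states (crossing / not crossing / longitudinal).
# The 17-keypoint input and hidden size are illustrative assumptions.
import torch
import torch.nn as nn

class PoseStateClassifier(nn.Module):
    def __init__(self, num_keypoints=17, hidden=64, num_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_keypoints * 2, hidden),  # flattened (x, y) coordinates
            nn.ReLU(),
            nn.Linear(hidden, num_classes),        # logits for C / NC / LONG
        )

    def forward(self, pose):
        # pose: (batch, num_keypoints, 2), normalized image coordinates
        return self.net(pose.flatten(start_dim=1))

# Example forward pass on a random pose batch.
model = PoseStateClassifier()
logits = model(torch.rand(8, 17, 2))
states = logits.argmax(dim=1)  # assumed ordering: 0 = C, 1 = NC, 2 = LONG
```

Because the input is only a single pose vector, inference amounts to two small matrix multiplications, which is consistent with the abstract's claim that the three states can be recognized within a very short time.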

* 10 pages 

The KiTS19 Challenge Data: 300 Kidney Tumor Cases with Clinical Context, CT Semantic Segmentations, and Surgical Outcomes

Mar 31, 2019
Nicholas Heller, Niranjan Sathianathen, Arveen Kalapara, Edward Walczak, Keenan Moore, Heather Kaluzniak, Joel Rosenberg, Paul Blake, Zachary Rengel, Makinna Oestreich, Joshua Dean, Michael Tradewell, Aneri Shah, Resha Tejpaul, Zachary Edgerton, Matthew Peterson, Shaneabbas Raza, Subodh Regmi, Nikolaos Papanikolopoulos, Christopher Weight


The morphometry of a kidney tumor revealed by contrast-enhanced Computed Tomography (CT) imaging is an important factor in clinical decision making surrounding the lesion's diagnosis and treatment. Quantitative study of the relationship between kidney tumor morphology and clinical outcomes is difficult due to data scarcity and the laborious nature of manually quantifying imaging predictors. Automatic semantic segmentation of kidneys and kidney tumors is a promising tool for automatically quantifying a wide array of morphometric features, but no sizeable annotated dataset is currently available to train models for this task. We present the KiTS19 challenge dataset: a collection of multi-phase CT imaging, segmentation masks, and comprehensive clinical outcomes for 300 patients who underwent nephrectomy for kidney tumors at our center between 2010 and 2018. 210 (70%) of these patients were selected at random as the training set for the 2019 MICCAI KiTS Kidney Tumor Segmentation Challenge and have been released publicly. With the presence of clinical context and surgical outcomes, this data can serve not only for benchmarking semantic segmentation models but also for developing and studying biomarkers which make use of the imaging and semantic segmentation masks.

* 13 pages, 2 figures 

Imperfect Segmentation Labels: How Much Do They Matter?

Sep 24, 2018
Nicholas Heller, Joshua Dean, Nikolaos Papanikolopoulos


Labeled datasets for semantic segmentation are imperfect, especially in medical imaging where borders are often subtle or ill-defined. Little work has been done to analyze the effect that label errors have on the performance of segmentation methodologies. Here we present a large-scale study of model performance in the presence of varying types and degrees of error in training data. We trained U-Net, SegNet, and FCN32 several times for liver segmentation with 10 different modes of ground-truth perturbation. Our results show that for each architecture, performance steadily declines with boundary-localized errors; however, U-Net was significantly more robust to jagged boundary errors than the other architectures. We also found that each architecture was very robust to non-boundary-localized errors, suggesting that boundary-localized errors are a fundamentally different and more challenging problem than random label errors in a classification setting.
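
For intuition about the distinction drawn above, the sketch below shows one simple boundary-localized perturbation (random dilation or erosion of the labeled region) alongside a non-boundary-localized one (uniform random label flips). These are illustrative stand-ins, not the paper's ten specific perturbation modes.

```python
# Illustrative ground-truth perturbations for a binary mask (scipy.ndimage).
# Not the paper's exact perturbation modes.
import numpy as np
from scipy import ndimage

def perturb_boundary(mask, max_iter=3, rng=None):
    """Randomly dilate or erode a binary mask, simulating boundary-localized errors."""
    rng = np.random.default_rng() if rng is None else rng
    iters = int(rng.integers(1, max_iter + 1))
    if rng.random() < 0.5:
        return ndimage.binary_dilation(mask, iterations=iters)
    return ndimage.binary_erosion(mask, iterations=iters)

def perturb_random_labels(mask, flip_fraction=0.01, rng=None):
    """Flip a small fraction of voxels uniformly at random (non-boundary-localized errors)."""
    rng = np.random.default_rng() if rng is None else rng
    flips = rng.random(mask.shape) < flip_fraction
    return np.logical_xor(mask, flips)
```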

* 9 pages, 3 figures, Accepted at MICCAI LABELS 2018 