Peter Steinbach
Uncertainty Estimation in Instance Segmentation with Star-convex Shapes

Sep 19, 2023
Qasim M. K. Siddiqui, Sebastian Starke, Peter Steinbach

Figures 1–4 for Uncertainty Estimation in Instance Segmentation with Star-convex Shapes

Instance segmentation has witnessed promising advancements through deep neural network-based algorithms. However, these models often produce incorrect predictions with unwarranted confidence levels. Consequently, evaluating prediction uncertainty becomes critical for informed decision-making. Existing methods primarily focus on quantifying uncertainty in classification or regression tasks, with little emphasis on instance segmentation. Our research addresses the challenge of estimating the spatial certainty associated with the location of instances with star-convex shapes. We evaluate two distinct clustering approaches that compute spatial and fractional certainty per instance from samples generated by Monte-Carlo Dropout or Deep Ensembles. Our study demonstrates that combining spatial and fractional certainty scores yields better-calibrated estimates than either certainty score alone. Notably, our experimental results show that the Deep Ensemble technique combined with our novel radial clustering approach is an effective strategy. Our findings emphasize the significance of evaluating the calibration of estimated certainties for model reliability and decision-making.
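The sampling step described in the abstract can be illustrated with a minimal NumPy sketch that aggregates several stochastic forward passes (as produced by Monte-Carlo Dropout or the members of a Deep Ensemble) into a per-pixel certainty score. The function name and the certainty definition below are illustrative assumptions, not the paper's actual clustering approach:

```python
import numpy as np

def sample_certainty(samples):
    """Aggregate T stochastic forward passes into a certainty estimate.

    `samples` has shape (T, H, W) and holds foreground probabilities
    from T Monte-Carlo Dropout passes (or T ensemble members).
    Returns the mean prediction and a certainty score in [0, 1],
    where 1 means all passes agree and lower values mean disagreement.
    """
    samples = np.asarray(samples, dtype=float)
    mean = samples.mean(axis=0)
    std = samples.std(axis=0)
    # Probabilities lie in [0, 1], so their standard deviation is at
    # most 0.5; rescale the disagreement into a certainty score.
    certainty = 1.0 - 2.0 * std
    return mean, certainty
```

In the paper's setting such per-pixel scores would then be clustered per instance; this sketch only shows the sample-aggregation idea.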


Detecting Adversarial Examples in Batches -- a geometrical approach

Jun 17, 2022
Danush Kumar Venkatesh, Peter Steinbach

Figures 1–4 for Detecting Adversarial Examples in Batches -- a geometrical approach

Many deep learning methods have successfully solved complex tasks in computer vision and speech recognition applications. Nonetheless, these models have been found to be vulnerable to perturbed inputs, or adversarial examples, which are imperceptible to the human eye but lead the model to erroneous output decisions. In this study, we adapt and introduce two geometric metrics, density and coverage, and evaluate their use in detecting adversarial samples in batches of unseen data. We empirically study these metrics using MNIST and two real-world biomedical datasets from MedMNIST, subjected to two different adversarial attacks. Our experiments show promising results for both metrics in detecting adversarial examples. We believe that this work can lay the ground for further study of these metrics' use in deployed machine learning systems to monitor for possible attacks by adversarial examples or related pathologies such as dataset shift.
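The density and coverage metrics mentioned above (introduced by Naeem et al., 2020, for generative-model evaluation) can be sketched in a few lines of NumPy. This is an illustrative re-implementation under the standard definitions, not the authors' code; in the adversarial-detection setting, `real` would hold embeddings of clean reference data and `query` a batch of unseen, possibly perturbed data:

```python
import numpy as np

def density_coverage(real, query, k=3):
    """Density and coverage between a reference batch `real` (N, D)
    and a query batch `query` (M, D), following Naeem et al. (2020).

    Density counts, per query point, how many k-NN balls around real
    points contain it; coverage is the fraction of real points whose
    k-NN ball contains at least one query point.
    """
    real = np.asarray(real, dtype=float)
    query = np.asarray(query, dtype=float)
    # Pairwise distances among real points -> k-NN radius per real point
    # (index k after sorting skips the zero self-distance).
    d_rr = np.linalg.norm(real[:, None] - real[None, :], axis=-1)
    radii = np.sort(d_rr, axis=1)[:, k]
    # Distances from each real point to each query point.
    d_rq = np.linalg.norm(real[:, None] - query[None, :], axis=-1)
    inside = d_rq < radii[:, None]  # (N, M) ball-membership matrix
    density = inside.sum() / (k * query.shape[0])
    coverage = inside.any(axis=1).mean()
    return density, coverage
```

A batch of adversarial examples would be expected to fall outside the reference manifold, depressing both scores relative to a clean batch.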

* Submitted to AdvML workshop at ICML2022 

Recommendations on test datasets for evaluating AI solutions in pathology

Apr 21, 2022
André Homeyer, Christian Geißler, Lars Ole Schwen, Falk Zakrzewski, Theodore Evans, Klaus Strohmenger, Max Westphal, Roman David Bülow, Michaela Kargl, Aray Karjauv, Isidre Munné-Bertran, Carl Orge Retzlaff, Adrià Romero-López, Tomasz Sołtysiński, Markus Plass, Rita Carvalho, Peter Steinbach, Yu-Chia Lan, Nassim Bouteldja, David Haber, Mateo Rojas-Carulla, Alireza Vafaei Sadr, Matthias Kraft, Daniel Krüger, Rutger Fick, Tobias Lang, Peter Boor, Heimo Müller, Peter Hufnagl, Norman Zerbe

Figures 1–4 for Recommendations on test datasets for evaluating AI solutions in pathology

Artificial intelligence (AI) solutions that automatically extract information from digital histology images have shown great promise for improving pathological diagnosis. Prior to routine use, it is important to evaluate their predictive performance and obtain regulatory approval. This assessment requires appropriate test datasets. However, compiling such datasets is challenging and specific recommendations are missing. A committee of various stakeholders, including commercial AI developers, pathologists, and researchers, discussed key aspects and conducted extensive literature reviews on test datasets in pathology. Here, we summarize the results and derive general recommendations for the collection of test datasets. We address several questions: Which and how many images are needed? How to deal with low-prevalence subsets? How can potential bias be detected? How should datasets be reported? What are the regulatory requirements in different countries? The recommendations are intended to help AI developers demonstrate the utility of their products and to help regulatory agencies and end users verify reported performance measures. Further research is needed to formulate criteria for sufficiently representative test datasets so that AI solutions can operate with less user intervention and better support diagnostic workflows in the future.
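One of the questions listed above, "how many images are needed?", is often approached with a standard sample-size calculation for a binomial proportion. The formula below is a textbook normal-approximation estimate, not something derived in the paper, and the function name is a hypothetical placeholder:

```python
import math

def required_test_cases(expected_accuracy, half_width, confidence=0.95):
    """Rough sample-size estimate for a test dataset: the number of
    cases needed so that a normal-approximation confidence interval
    around an expected accuracy has at most the requested half-width.

    n = z^2 * p * (1 - p) / e^2
    """
    # Two-sided z-scores for common confidence levels.
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]
    p = expected_accuracy
    n = (z ** 2) * p * (1.0 - p) / (half_width ** 2)
    return math.ceil(n)
```

For example, verifying an expected accuracy of 0.9 to within ±3 percentage points at 95% confidence requires on the order of a few hundred cases; low-prevalence subsets, as the paper notes, need separate consideration because this count applies per stratum.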


Machine Learning State-of-the-Art with Uncertainties

Apr 14, 2022
Peter Steinbach, Felicita Gernhardt, Mahnoor Tanveer, Steve Schmerler, Sebastian Starke

Figures 1–4 for Machine Learning State-of-the-Art with Uncertainties

With the availability of data, hardware, software ecosystems and relevant skill sets, the machine learning community is undergoing rapid development, with new architectures and approaches appearing every year. In this article, we conduct an exemplary image classification study in order to demonstrate how confidence intervals around accuracy measurements can greatly enhance the communication of research results as well as impact the reviewing process. In addition, we explore the hallmarks and limitations of this approximation. We discuss the relevance of this approach, reflecting on a spotlight publication at ICLR 2022. A reproducible workflow is made available as an open-source companion to this publication. Based on our discussion, we make suggestions for improving the authoring and reviewing process of machine learning articles.
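A confidence interval around an accuracy measurement of the kind discussed above can be computed with the Wilson score interval for a binomial proportion, which behaves better than the naive normal approximation for accuracies near 0 or 1. This is a standard-textbook sketch, not the paper's exact methodology:

```python
import math

def wilson_interval(correct, total, z=1.96):
    """Wilson score interval for a binomial proportion, e.g. the
    accuracy of a classifier that got `correct` out of `total` test
    images right. Returns (low, high) at the confidence level implied
    by `z` (1.96 corresponds to 95%)."""
    p = correct / total
    denom = 1.0 + z * z / total
    centre = (p + z * z / (2 * total)) / denom
    half = (z / denom) * math.sqrt(
        p * (1.0 - p) / total + z * z / (4.0 * total ** 2)
    )
    return centre - half, centre + half
```

For instance, 85 correct out of 100 test images yields an interval of roughly (0.77, 0.91), making it clear that a competing result of 86% accuracy on the same test set is not a meaningful improvement.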

* 9 pages, 6 figures. Accepted at the ICLR2022 workshop on ML Evaluation Standards. Code to reproduce results can be obtained from https://github.com/psteinb/sota_on_uncertainties.git 