Jakob Gawlikowski

The Unreasonable Effectiveness of Deep Evidential Regression

May 20, 2022
Nis Meinert, Jakob Gawlikowski, Alexander Lavin

There is a significant need for principled uncertainty reasoning in machine learning systems as they are increasingly deployed in safety-critical domains. A new approach with uncertainty-aware regression-based neural networks (NNs), based on learning evidential distributions for aleatoric and epistemic uncertainties, shows promise over traditional deterministic methods and typical Bayesian NNs, notably the capability to disentangle the two kinds of uncertainty. Despite some empirical success of Deep Evidential Regression (DER), there are important gaps in the mathematical foundation that raise the question of why the proposed technique seemingly works. We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification. We go on to propose corrections and redefinitions of how aleatoric and epistemic uncertainties should be extracted from NNs.

* 14 pages, 25 figures 
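
The following minimal sketch, assuming the standard Normal-Inverse-Gamma parametrization of Deep Evidential Regression that the paper analyzes, shows how aleatoric and epistemic uncertainties are typically read off a DER output head; the function name and example values are illustrative, not the authors' implementation.

```python
import numpy as np

def softplus(x):
    """Numerically stable softplus, used to keep nu, alpha, beta positive."""
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def nig_uncertainties(raw_outputs):
    """Map the four raw outputs of a DER head to a prediction and its
    aleatoric/epistemic uncertainties under a Normal-Inverse-Gamma prior.

    raw_outputs: array of shape (..., 4) with unconstrained values
                 (gamma, nu, alpha, beta) before activation.
    """
    gamma = raw_outputs[..., 0]                  # predicted mean, unconstrained
    nu    = softplus(raw_outputs[..., 1])        # virtual evidence for the mean, > 0
    alpha = softplus(raw_outputs[..., 2]) + 1.0  # shape parameter, > 1 so moments exist
    beta  = softplus(raw_outputs[..., 3])        # scale parameter, > 0

    prediction = gamma
    aleatoric  = beta / (alpha - 1.0)            # E[sigma^2]: irreducible data noise
    epistemic  = beta / (nu * (alpha - 1.0))     # Var[mu]: model uncertainty, shrinks with evidence
    return prediction, aleatoric, epistemic

# Example with two hypothetical raw outputs of a trained network.
raw = np.array([[0.3, -1.2, 0.8, 0.1],
                [2.5,  3.0, 2.0, 0.4]])
pred, alea, epis = nig_uncertainties(raw)
print(pred, alea, epis)
```

As the abstract argues, the quantity labelled epistemic in this construction should be read as a heuristic rather than an exact uncertainty quantification.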

A Survey of Uncertainty in Deep Neural Networks

Jul 07, 2021
Jakob Gawlikowski, Cedrique Rovile Njieutcheu Tassi, Mohsin Ali, Jongseok Lee, Matthias Humt, Jianxiang Feng, Anna Kruspe, Rudolph Triebel, Peter Jung, Ribana Roscher, Muhammad Shahzad, Wen Yang, Richard Bamler, Xiao Xiang Zhu

As neural networks are increasingly deployed, confidence in their predictions has become more and more important. However, basic neural networks either do not deliver certainty estimates or suffer from over- or under-confidence. Many researchers have been working on understanding and quantifying uncertainty in a neural network's prediction. As a result, different types and sources of uncertainty have been identified, and a variety of approaches to measure and quantify uncertainty in neural networks have been proposed. This work gives a comprehensive overview of uncertainty estimation in neural networks, reviews recent advances in the field, highlights current challenges, and identifies potential research opportunities. It is intended to give anyone interested in uncertainty estimation in neural networks a broad overview and introduction, without presupposing prior knowledge of the field. A comprehensive introduction to the most crucial sources of uncertainty is given, along with their separation into reducible model uncertainty and irreducible data uncertainty. The modeling of these uncertainties based on deterministic neural networks, Bayesian neural networks, ensembles of neural networks, and test-time data augmentation is introduced, and different branches of these fields as well as the latest developments are discussed. For practical application, we discuss different measures of uncertainty and approaches for calibrating neural networks, and give an overview of existing baselines and implementations. Examples from a wide spectrum of fields illustrate the needs and challenges regarding uncertainty in practical applications. Additionally, the practical limitations of current methods for mission- and safety-critical real-world applications are discussed, and an outlook on the next steps towards a broader usage of such methods is given.
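
As a concrete illustration of the separation into reducible model uncertainty and irreducible data uncertainty mentioned above, the following sketch shows one common ensemble-based decomposition (entropy of the mean prediction versus mean entropy of the members); it is a generic example under assumed inputs, not code from the survey.

```python
import numpy as np

def ensemble_uncertainty(member_probs, eps=1e-12):
    """Decompose predictive uncertainty for an ensemble of classifiers.

    member_probs: array of shape (M, C) with softmax outputs of M ensemble
                  members for a single input over C classes.
    Returns (total, aleatoric, epistemic) in nats, where
      total     = entropy of the averaged prediction,
      aleatoric = average entropy of the individual members (data uncertainty),
      epistemic = total - aleatoric (mutual information, model uncertainty).
    """
    mean_probs = member_probs.mean(axis=0)
    total = -np.sum(mean_probs * np.log(mean_probs + eps))
    aleatoric = -np.mean(np.sum(member_probs * np.log(member_probs + eps), axis=1))
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Example: three members agree on the class but differ in confidence.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.6, 0.3, 0.1],
                  [0.8, 0.1, 0.1]])
print(ensemble_uncertainty(probs))
```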

Leveraging Evidential Deep Learning Uncertainties with Graph-based Clustering to Detect Anomalies

Jul 04, 2021
Sandeep Kumar Singh, Jaya Shradha Fowdur, Jakob Gawlikowski, Daniel Medina

Understanding and representing traffic patterns are key to detecting anomalies in the maritime domain. To this end, we propose a novel graph-based traffic representation and association scheme to cluster trajectories of vessels using automatic identification system (AIS) data. We utilize the (un)clustered data to train a recurrent neural network (RNN)-based evidential regression model, which can predict a vessel's trajectory at future timesteps together with its corresponding prediction uncertainty. This paper proposes the use of deep learning (DL)-based uncertainty estimation for detecting maritime anomalies, such as unusual vessel maneuvering. Furthermore, we utilize evidential deep learning classifiers to detect unusual turns of vessels and the loss of AIS signal using predicted class probabilities with associated uncertainties. Our experimental results suggest that using graph-based clustered data improves the ability of the DL models to learn the spatio-temporal correlations of the data and the associated uncertainties. Using different AIS datasets and experiments, we demonstrate that the estimated prediction uncertainty yields fundamental information for the detection of traffic anomalies in the maritime domain and, possibly, in other domains.

* Under submission in a Journal 
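
The following is a minimal, hypothetical sketch of the kind of uncertainty-aware anomaly flagging described above: observed vessel positions are compared with the model's predictions, normalized by the predicted standard deviation. The threshold and all values are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def flag_anomalies(observed, predicted_mean, predicted_std, z_threshold=3.0):
    """Flag timesteps whose observed position deviates from the model's
    prediction by more than z_threshold predicted standard deviations.

    observed, predicted_mean, predicted_std: arrays of shape (T, 2),
    e.g. latitude/longitude per timestep; returns a boolean mask of shape (T,).
    """
    z = np.abs(observed - predicted_mean) / np.maximum(predicted_std, 1e-8)
    return np.any(z > z_threshold, axis=-1)

# Example with made-up values: the third timestep is an unusual maneuver.
obs  = np.array([[54.10, 12.10], [54.11, 12.12], [54.30, 12.50]])
mean = np.array([[54.10, 12.10], [54.11, 12.11], [54.12, 12.13]])
std  = np.array([[0.01, 0.01],   [0.01, 0.01],   [0.01, 0.01]])
print(flag_anomalies(obs, mean, std))   # -> [False False  True]
```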

Out-of-distribution detection in satellite image classification

Apr 09, 2021
Jakob Gawlikowski, Sudipan Saha, Anna Kruspe, Xiao Xiang Zhu

In satellite image analysis, distributional mismatch between the training and test data may arise for several reasons, including unseen classes in the test data and differences in the geographic area. Deep learning based models may behave in unexpected ways when subjected to test data that has such distributional shifts from the training data, also called out-of-distribution (OOD) examples. Predictive uncertainty analysis is an emerging research topic which has not been explored much in the context of satellite image analysis. Towards this, we adopt a Dirichlet Prior Network based model to quantify the distributional uncertainty of deep learning models for remote sensing. The approach seeks to maximize the representation gap between in-domain and OOD examples for better identification of unknown examples at test time. Experimental results on three exemplary test scenarios show the efficacy of the model in satellite image analysis.
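
A minimal sketch, assuming the predicted Dirichlet concentration parameters of a Dirichlet Prior Network, of how distributional uncertainty is commonly computed as the mutual information between the label and the categorical distribution; the measure and the example values are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.special import digamma

def dirichlet_distributional_uncertainty(alphas, eps=1e-12):
    """Mutual information between the class label and the categorical
    distribution under a Dirichlet with concentration parameters `alphas`
    (shape (..., C)); a common measure of distributional uncertainty for
    Dirichlet Prior Networks. Larger values suggest OOD inputs.
    """
    alpha0 = alphas.sum(axis=-1, keepdims=True)
    p_mean = alphas / alpha0
    # Entropy of the expected categorical distribution (total uncertainty).
    total = -np.sum(p_mean * np.log(p_mean + eps), axis=-1)
    # Expected entropy of the categorical distribution (expected data uncertainty).
    expected = -np.sum(p_mean * (digamma(alphas + 1.0) - digamma(alpha0 + 1.0)), axis=-1)
    return total - expected

# Example: a sharp, high-evidence Dirichlet (in-distribution-like)
# versus a flat, low-evidence one (OOD-like).
print(dirichlet_distributional_uncertainty(np.array([50.0, 2.0, 2.0])))  # small
print(dirichlet_distributional_uncertainty(np.array([1.1, 1.0, 1.2])))   # larger
```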
