
Jes Frellsen


Polygonizer: An auto-regressive building delineator

Apr 08, 2023
Maxim Khomiakov, Michael Riis Andersen, Jes Frellsen

In geospatial planning, it is often essential to represent objects in a vectorized format, as this format easily translates to downstream tasks such as web development, graphics, or design. While these problems are frequently addressed using semantic segmentation, which requires additional post-processing to vectorize objects in a non-trivial way, we present an Image-to-Sequence model that allows for direct shape inference and is ready for vector-based workflows out of the box. We demonstrate the model's performance in various ways, including perturbations to the image input that correspond to variations or artifacts commonly encountered in remote sensing applications. Our model outperforms prior works when using ground truth bounding boxes (one object per image), achieving the lowest maximum tangent angle error.
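
As a rough illustration of the Image-to-Sequence idea, the sketch below decodes a building outline one vertex token at a time; the `model` interface (`encode`, `next_vertex_logits`) and the token conventions are hypothetical stand-ins, not the paper's actual API.

```python
# Minimal sketch of autoregressive polygon decoding (hypothetical interface).
import torch

def decode_polygon(model, image, max_vertices=64, eos_token=0):
    """Greedily decode one building outline as a sequence of vertex tokens."""
    feats = model.encode(image)                # image features, e.g. from a CNN
    tokens = [model.bos_token]                 # start-of-sequence token
    for _ in range(max_vertices):
        logits = model.next_vertex_logits(feats, torch.tensor(tokens))
        nxt = int(logits.argmax())             # greedy choice of the next token
        if nxt == eos_token:                   # the model signals polygon closure
            break
        tokens.append(nxt)
    return tokens[1:]                          # drop the BOS token
```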

* ICLR 2023 Workshop on Machine Learning in Remote Sensing 

That Label's Got Style: Handling Label Style Bias for Uncertain Image Segmentation

Mar 28, 2023
Kilian Zepf, Eike Petersen, Jes Frellsen, Aasa Feragen

Segmentation uncertainty models predict a distribution over plausible segmentations for a given input, which they learn from the annotator variation in the training set. However, in practice these annotations can differ systematically in the way they are generated, for example through the use of different labeling tools. This results in datasets that contain both data variability and differing label styles. In this paper, we demonstrate that applying state-of-the-art segmentation uncertainty models to such datasets can lead to model bias caused by the different label styles. We present an updated modelling objective that conditions on labeling style for aleatoric uncertainty estimation, and modify two state-of-the-art architectures for segmentation uncertainty accordingly. We show with extensive experiments that this method reduces label style bias while also improving segmentation performance, increasing the applicability of segmentation uncertainty models in the wild. We curate two datasets with annotations in different label styles, which we will make publicly available along with our code upon publication.
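
As a minimal sketch of what conditioning on label style can look like, assuming a style identifier is available per annotation, the toy module below injects a learned style embedding into the segmentation head; the architecture is illustrative, not the paper's exact networks.

```python
# Toy sketch: condition a segmentation head on a label-style embedding.
# Backbone, dimensions and the fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn

class StyleConditionedSegNet(nn.Module):
    def __init__(self, backbone, backbone_channels, num_styles, num_classes,
                 style_dim=16):
        super().__init__()
        self.backbone = backbone                         # any feature extractor
        self.style_emb = nn.Embedding(num_styles, style_dim)
        self.head = nn.Conv2d(backbone_channels + style_dim, num_classes, 1)

    def forward(self, image, style_id):
        feats = self.backbone(image)                     # (B, C, H, W)
        style = self.style_emb(style_id)                 # (B, style_dim)
        style = style[:, :, None, None].expand(-1, -1, *feats.shape[2:])
        # Predict per-pixel logits given both the image and the label style.
        return self.head(torch.cat([feats, style], dim=1))
```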

Laplacian Segmentation Networks: Improved Epistemic Uncertainty from Spatial Aleatoric Uncertainty

Mar 23, 2023
Kilian Zepf, Selma Wanna, Marco Miani, Juston Moore, Jes Frellsen, Søren Hauberg, Aasa Feragen, Frederik Warburg

Out-of-distribution (OOD) medical images are frequently encountered, e.g. due to differences between sites or scanners, or image corruption. OOD images come with a risk of incorrect image segmentation, potentially negatively affecting downstream diagnoses or treatment. To ensure robustness to such incorrect segmentations, we propose Laplacian Segmentation Networks (LSNs) that jointly model epistemic (model) and aleatoric (data) uncertainty in image segmentation. We capture data uncertainty with a spatially correlated logit distribution. For model uncertainty, we propose the first Laplace approximation of the weight posterior that scales to large neural networks with skip connections and high-dimensional outputs. Empirically, we demonstrate that modelling spatial pixel correlation allows the Laplacian Segmentation Network to successfully assign high epistemic uncertainty to out-of-distribution objects appearing within images.
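
One common way to realise a spatially correlated logit distribution is a low-rank reparameterisation in which a covariance factor couples pixels; the sketch below shows sampling under that assumption and is an illustration, not the paper's exact construction.

```python
# Sketch: sample spatially correlated logits via a low-rank covariance factor.
# Shapes and the parameterisation are assumptions made for illustration.
import torch

def sample_correlated_logits(mean, cov_factor, num_samples=8):
    """mean: (N,) flattened logits; cov_factor: (N, rank) -> (S, N) samples."""
    eps = torch.randn(num_samples, cov_factor.shape[1])
    # logits = mean + P @ eps implies covariance P P^T, which couples pixels
    # instead of treating each pixel's logit as independent noise.
    return mean + eps @ cov_factor.T
```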

Learning to Generate 3D Representations of Building Roofs Using Single-View Aerial Imagery

Mar 20, 2023
Maxim Khomiakov, Alejandro Valverde Mahou, Alba Reinders Sánchez, Jes Frellsen, Michael Riis Andersen

We present a novel pipeline for learning the conditional distribution of a building roof mesh given pixels from an aerial image, under the assumption that roof geometry follows a set of regular patterns. Unlike alternative methods that require multiple images of the same object, our approach enables estimating 3D roof meshes from only a single image. The approach employs PolyGen, a deep generative transformer architecture for 3D meshes. We apply this model in a new domain and investigate its sensitivity to image resolution. We propose a novel metric to evaluate the performance of the inferred meshes, and our results show that the model is robust even at lower resolutions, while qualitatively producing realistic representations for out-of-distribution samples.
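
A sketch of PolyGen-style two-stage sampling conditioned on a single image, mirroring the pipeline described above; the `vertex_model` and `face_model` interfaces are hypothetical stand-ins for the actual implementation.

```python
# Sketch of PolyGen-style two-stage mesh sampling (hypothetical interfaces).
def sample_roof_mesh(vertex_model, face_model, image):
    # Stage 1: autoregressively sample quantised 3D vertex coordinates,
    # conditioned on the aerial image.
    vertices = vertex_model.sample(image=image)
    # Stage 2: sample faces as pointer sequences over the sampled vertices.
    faces = face_model.sample(image=image, vertices=vertices)
    return vertices, faces
```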

* Copyright 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works 

Internal-Coordinate Density Modelling of Protein Structure: Covariance Matters

Feb 27, 2023
Marloes Arts, Jes Frellsen, Wouter Boomsma

After the recent ground-breaking advances in protein structure prediction, one of the remaining challenges in protein machine learning is to reliably predict distributions of structural states. Parametric models of small-scale fluctuations are difficult to fit due to complex covariance structures between degrees of freedom in the protein chain, often causing models to violate either local or global structural constraints. In this paper, we present a new strategy for modelling protein densities in internal coordinates, which uses constraints in 3D space to induce covariance structure between the internal degrees of freedom. We illustrate the potential of the procedure by constructing a variational autoencoder with full covariance output induced by the constraints implied by the conditional mean in 3D, and demonstrate that our approach makes it possible to scale density models of internal coordinates to full-size proteins.
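
The bridge between internal coordinates and 3D space rests on the standard construction that places each atom from a bond length, bond angle, and dihedral relative to the three preceding atoms; a minimal NumPy version of that placement step is sketched below.

```python
# Standard internal-to-Cartesian placement: position the next atom from the
# three preceding atoms given a bond length, bond angle and dihedral (radians).
import numpy as np

def place_atom(a, b, c, bond_length, bond_angle, dihedral):
    bc = (c - b) / np.linalg.norm(c - b)
    n = np.cross(b - a, bc)
    n /= np.linalg.norm(n)
    m = np.cross(n, bc)
    # Local frame (bc, m, n); spherical coordinates give the displacement.
    d_local = bond_length * np.array([
        -np.cos(bond_angle),
        np.sin(bond_angle) * np.cos(dihedral),
        np.sin(bond_angle) * np.sin(dihedral),
    ])
    return c + d_local[0] * bc + d_local[1] * m + d_local[2] * n
```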

* Pages: 8 main, 2 references, 3 appendix. Figures: 5 main, 2 appendix 

Explainability as statistical inference

Dec 06, 2022
Hugo Henri Joseph Senetaire, Damien Garreau, Jes Frellsen, Pierre-Alexandre Mattei

A wide variety of model explanation approaches have been proposed in recent years, all guided by very different rationales and heuristics. In this paper, we take a new route and cast interpretability as a statistical inference problem. We propose a general deep probabilistic model designed to produce interpretable predictions. The model parameters can be learned via maximum likelihood, and the method can be adapted to any predictor network architecture and any type of prediction problem. Our method is an instance of amortized interpretability, where a neural network is used as a selector to allow for fast interpretation at inference time. Several popular interpretability methods are shown to be particular cases of regularised maximum likelihood for our general model. We propose new datasets with ground-truth selections, which allow for the evaluation of feature importance maps. Using these datasets, we show experimentally that using multiple imputation provides more reasonable interpretations.
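
A rough sketch of the amortized selector combined with multiple imputation, as described above; all names (`selector`, `predictor`, `imputer`) are illustrative assumptions rather than the paper's code.

```python
# Sketch: amortized selector + multiple imputation (illustrative names).
import torch

def interpret(selector, predictor, imputer, x, num_imputations=10):
    probs = torch.sigmoid(selector(x))      # per-feature importance map
    mask = torch.bernoulli(probs)           # sample a feature subset
    preds = []
    for _ in range(num_imputations):
        # Replace unselected features with draws from a conditional imputer.
        x_imp = mask * x + (1 - mask) * imputer.sample(x, mask)
        preds.append(predictor(x_imp))
    # Averaging over imputations marginalises out the unselected features.
    return probs, torch.stack(preds).mean(dim=0)
```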

* 10 pages, 22 figures, submitted to ICLR 2023 

SolarDK: A high-resolution urban solar panel image classification and localization dataset

Dec 02, 2022
Maxim Khomiakov, Julius Holbech Radzikowski, Carl Anton Schmidt, Mathias Bonde Sørensen, Mads Andersen, Michael Riis Andersen, Jes Frellsen

The body of research on classification of solar panel arrays from aerial imagery is increasing, yet there are still few public benchmark datasets. This paper introduces two novel benchmark datasets for classifying and localizing solar panel arrays in Denmark: a human-annotated dataset for classification and segmentation, and a classification dataset acquired using self-reported data from the Danish national building registry. We explore the performance of prior works on the new benchmark dataset and present results after fine-tuning models using an approach similar to recent works. Furthermore, we train models with newer architectures and provide benchmark baselines for our datasets in several scenarios. We believe the release of these datasets may improve future research in both local and global geospatial domains for identifying and mapping solar panel arrays from aerial imagery. The data is accessible at https://osf.io/aj539/.

* 7 pages, 2 figures, to access the dataset, see https://osf.io/aj539/ 

deep-significance - Easy and Meaningful Statistical Significance Testing in the Age of Neural Networks

Apr 14, 2022
Dennis Ulmer, Christian Hardmeier, Jes Frellsen

A lot of machine learning (ML) and deep learning (DL) research is of an empirical nature. Nevertheless, statistical significance testing (SST) is still not widely used. This endangers true progress, as seeming improvements over a baseline might be statistical flukes, leading follow-up research astray while wasting human and computational resources. Here, we provide an easy-to-use package containing different significance tests and utility functions specifically tailored to the needs and usability requirements of ML researchers.
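
Usage might look like the following, assuming the package's documented `aso` entry point (the Almost Stochastic Order test); the scores below are synthetic placeholders.

```python
# Usage sketch for deep-significance, assuming the documented `aso` function.
import numpy as np
from deepsig import aso

scores_a = np.random.normal(0.85, 0.02, size=5)  # e.g. 5 seeds of model A
scores_b = np.random.normal(0.83, 0.02, size=5)  # e.g. 5 seeds of model B

min_eps = aso(scores_a, scores_b, seed=123)
# eps_min < 0.5 indicates that A is almost stochastically dominant over B;
# values closer to 0 constitute stronger evidence.
print(f"eps_min = {min_eps:.3f}")
```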

Benchmarking Generative Latent Variable Models for Speech

Apr 05, 2022
Jakob D. Havtorn, Lasse Borgholt, Søren Hauberg, Jes Frellsen, Lars Maaløe

Stochastic latent variable models (LVMs) achieve state-of-the-art performance on natural image generation but are still inferior to deterministic models on speech. In this paper, we develop a speech benchmark of popular temporal LVMs and compare them against state-of-the-art deterministic models. We report the likelihood, a metric widely used in the image domain but rarely, or incomparably, reported for speech models. To assess the quality of the learned representations, we also compare their usefulness for phoneme recognition. Finally, we adapt the Clockwork VAE, a state-of-the-art temporal LVM for video generation, to the speech domain. Despite being autoregressive only in latent space, we find that the Clockwork VAE can outperform previous LVMs and reduce the gap to deterministic models by using a hierarchy of latent variables.
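
The clockwork idea can be sketched as a latent hierarchy whose layers update at different rates, so slow layers capture long-range structure; the cell interface below is an illustrative assumption, not the actual model.

```python
# Sketch of a clockwork latent hierarchy (hypothetical cell interface):
# layer l only updates its state every clock_rates[l] steps, so higher
# (slower) layers summarise longer-range structure in the signal.
def run_clockwork_hierarchy(cells, clock_rates, inputs):
    states = [cell.initial_state() for cell in cells]
    outputs = []
    for t, x in enumerate(inputs):
        top_down = None
        for l in reversed(range(len(cells))):    # slowest layer first
            if t % clock_rates[l] == 0:          # this layer ticks now
                states[l] = cells[l].step(states[l], x, top_down)
            top_down = states[l]                 # condition faster layers
        outputs.append(top_down)
    return outputs
```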

* Accepted at the 2022 ICLR workshop on Deep Generative Models for Highly Structured Data (https://deep-gen-struct.github.io) 

Model-agnostic out-of-distribution detection using combined statistical tests

Mar 02, 2022
Federico Bergamin, Pierre-Alexandre Mattei, Jakob D. Havtorn, Hugo Senetaire, Hugo Schmutz, Lars Maaløe, Søren Hauberg, Jes Frellsen

We present simple methods for out-of-distribution detection using a trained generative model. These techniques, based on classical statistical tests, are model-agnostic in the sense that they can be applied to any differentiable generative model. The idea is to combine a classical parametric test (Rao's score test) with the recently introduced typicality test. The two test statistics are both theoretically well-founded and exploit different sources of information: the likelihood for the typicality test and its gradient for the score test. We show that combining them using Fisher's method leads to an overall more accurate out-of-distribution test. We also discuss the benefits of casting out-of-distribution detection as a statistical testing problem, noting in particular that false positive rate control can be valuable for practical out-of-distribution detection. Despite their simplicity and generality, these methods can be competitive with model-specific out-of-distribution detection algorithms without any assumptions on the out-distribution.
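
A sketch of the combination step, assuming the two test statistics and their empirical null distributions (computed on held-out in-distribution data) are given; the p-value construction is one plausible choice, not necessarily the paper's.

```python
# Sketch: combine a typicality test and a score test with Fisher's method.
# Empirical-null p-values are one plausible construction, assumed here.
import numpy as np
from scipy.stats import combine_pvalues

def fisher_ood_pvalue(stat_typ, stat_score, null_typ, null_score):
    """stat_*: statistics for the test input; null_*: the same statistics
    evaluated on held-out in-distribution data (empirical nulls)."""
    p_typ = (np.sum(null_typ >= stat_typ) + 1) / (len(null_typ) + 1)
    p_score = (np.sum(null_score >= stat_score) + 1) / (len(null_score) + 1)
    # Fisher's method: -2 * sum(log p_i) follows a chi-squared distribution
    # with 2k degrees of freedom under the null of both tests.
    _, p_combined = combine_pvalues([p_typ, p_score], method="fisher")
    return p_combined
```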

* Accepted at the 25th International Conference on Artificial Intelligence and Statistics (AISTATS), 2022 