Lars Hammarstrand

Improving Open-Set Semi-Supervised Learning with Self-Supervision

Jan 24, 2023
Erik Wallin, Lennart Svensson, Fredrik Kahl, Lars Hammarstrand

Open-set semi-supervised learning (OSSL) is a realistic setting of semi-supervised learning where the unlabeled training set contains classes that are not present in the labeled set. Many existing OSSL methods assume that these out-of-distribution data are harmful and put effort into excluding data from unknown classes from the training objective. In contrast, we propose an OSSL framework that facilitates learning from all unlabeled data through self-supervision. Additionally, we utilize an energy-based score to accurately recognize data belonging to the known classes, making our method well-suited for handling uncurated data in deployment. Through extensive experimental evaluations on several datasets, we show that our method achieves overall unmatched robustness and performance in terms of closed-set accuracy and open-set recognition compared with state-of-the-art OSSL methods. Our code will be released upon publication.
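
As a rough illustration of the open-set recognition component: energy-based scores of this kind are usually computed from the classifier logits via a log-sum-exp (free energy). The sketch below follows that standard formulation; the temperature `T` and threshold `tau` are illustrative assumptions, not values from the paper.

```python
import torch

def energy_score(logits: torch.Tensor, T: float = 1.0) -> torch.Tensor:
    # Negative free energy: higher values indicate known-class (in-distribution) data.
    return T * torch.logsumexp(logits / T, dim=-1)

# Illustrative usage: treat an unlabeled sample as belonging to the known
# classes when its score exceeds a threshold (tuned on validation data).
logits = torch.randn(8, 10)  # batch of 8 samples, 10 known classes
tau = 0.0                    # hypothetical threshold
is_known = energy_score(logits) > tau
```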

* Preprint 

DoubleMatch: Improving Semi-Supervised Learning with Self-Supervision

May 11, 2022
Erik Wallin, Lennart Svensson, Fredrik Kahl, Lars Hammarstrand

Following the success of supervised learning, semi-supervised learning (SSL) is now becoming increasingly popular. SSL is a family of methods that, in addition to a labeled training set, also use a sizable collection of unlabeled data for fitting a model. Most of the recent successful SSL methods are based on pseudo-labeling approaches: letting confident model predictions act as training labels. While these methods have shown impressive results on many benchmark datasets, a drawback of this approach is that not all unlabeled data are used during training. We propose a new SSL algorithm, DoubleMatch, which combines the pseudo-labeling technique with a self-supervised loss, enabling the model to utilize all unlabeled data in the training process. We show that this method achieves state-of-the-art accuracies on multiple benchmark datasets while also reducing training times compared to existing SSL methods. Code is available at https://github.com/walline/doublematch.
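
A minimal sketch of combining the two objectives, assuming a FixMatch-style confidence threshold for the pseudo-label term and a cosine-similarity self-supervised term on all unlabeled data; names and hyperparameters here are illustrative, and the actual implementation is in the linked repository.

```python
import torch
import torch.nn.functional as F

def unlabeled_loss(logits_w, logits_s, feat_w, feat_s,
                   threshold=0.95, w_self=1.0):
    """Pseudo-label loss on confident samples plus a self-supervised
    feature-alignment loss on every unlabeled sample (sketch)."""
    probs_w = torch.softmax(logits_w.detach(), dim=-1)
    conf, pseudo = probs_w.max(dim=-1)
    mask = (conf >= threshold).float()
    # Cross-entropy of the strongly augmented view against the weak view's
    # confident predictions, as in FixMatch-style pseudo-labeling.
    loss_pl = (F.cross_entropy(logits_s, pseudo, reduction="none") * mask).mean()
    # Self-supervised term: align strong-view features with (stop-gradient)
    # weak-view features, so no unlabeled data are discarded.
    loss_self = -F.cosine_similarity(feat_s, feat_w.detach(), dim=-1).mean()
    return loss_pl + w_self * loss_self
```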

* ICPR 2022 

Extended Object Tracking Using Sets Of Trajectories with a PHD Filter

Sep 02, 2021
Jakob Sjudin, Martin Marcusson, Lennart Svensson, Lars Hammarstrand

PHD filtering is a common and effective multiple object tracking (MOT) algorithm used in scenarios where the number of objects and their states are unknown. In scenarios where each object can generate multiple measurements per scan, some PHD filters can estimate the extent of the objects as well as their kinematic properties. Most of these approaches are, however, not able to inherently estimate trajectories and rely on ad-hoc methods, such as different labeling schemes, to build trajectories from the state estimates. This paper presents a Gamma Gaussian inverse Wishart mixture PHD filter that can directly estimate sets of trajectories of extended targets by expanding previous research on tracking sets of trajectories for point source objects to handle extended objects. The new filter is compared to an existing extended PHD filter that uses a labeling scheme to build trajectories, and it is shown that the new filter can estimate object trajectories more reliably.
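
For reference, the standard point-object PHD corrector that this line of work generalizes (notation ours); the extended-object variants replace the per-measurement sum with a sum over partitions of the measurement set:

\[
D_{k|k}(x) = \bigl(1 - p_D(x)\bigr) D_{k|k-1}(x)
+ \sum_{z \in Z_k} \frac{p_D(x)\, g(z \mid x)\, D_{k|k-1}(x)}{\kappa(z) + \int p_D(\xi)\, g(z \mid \xi)\, D_{k|k-1}(\xi)\, \mathrm{d}\xi},
\]

where \(D\) is the intensity (PHD) of the object set, \(p_D\) the probability of detection, \(g\) the measurement likelihood, \(Z_k\) the measurement set at time \(k\), and \(\kappa\) the clutter intensity.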

* 8 pages, 4 figures. Submitted to 24th International Conference on Information Fusion 

Back to the Feature: Learning Robust Camera Localization from Pixels to Pose

Apr 07, 2021
Paul-Edouard Sarlin, Ajaykumar Unagar, Måns Larsson, Hugo Germain, Carl Toft, Viktor Larsson, Marc Pollefeys, Vincent Lepetit, Lars Hammarstrand, Fredrik Kahl, Torsten Sattler

Camera pose estimation in known scenes is a 3D geometry task recently tackled by multiple learning algorithms. Many regress precise geometric quantities, like poses or 3D points, from an input image. This either fails to generalize to new viewpoints or ties the model parameters to a specific scene. In this paper, we go Back to the Feature: we argue that deep networks should focus on learning robust and invariant visual features, while the geometric estimation should be left to principled algorithms. We introduce PixLoc, a scene-agnostic neural network that estimates an accurate 6-DoF pose from an image and a 3D model. Our approach is based on the direct alignment of multiscale deep features, casting camera localization as metric learning. PixLoc learns strong data priors by end-to-end training from pixels to pose and exhibits exceptional generalization to new scenes by separating model parameters and scene geometry. The system can localize in large environments given coarse pose priors but also improve the accuracy of sparse feature matching by jointly refining keypoints and poses with little overhead. The code will be publicly available at https://github.com/cvg/pixloc.
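
In condensed form (our notation, not the paper's), the direct alignment underlying PixLoc can be written as a robust feature-metric optimization over the 6-DoF pose:

\[
(R^\ast, t^\ast) = \arg\min_{R,\,t} \sum_i \rho\!\left( \bigl\| \mathbf{F}_q\bigl[\pi(R\,\mathbf{p}_i + t)\bigr] - \mathbf{f}_i \bigr\|^2 \right),
\]

where \(\mathbf{p}_i\) are 3D model points with reference features \(\mathbf{f}_i\), \(\mathbf{F}_q\) is the multiscale query feature map, \(\pi\) the camera projection, and \(\rho\) a robust cost; the minimization proceeds coarse-to-fine with a damped Gauss-Newton-style solver.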

* Accepted to CVPR 2021 

Fine-Grained Segmentation Networks: Self-Supervised Segmentation for Improved Long-Term Visual Localization

Aug 18, 2019
Måns Larsson, Erik Stenborg, Carl Toft, Lars Hammarstrand, Torsten Sattler, Fredrik Kahl

Long-term visual localization is the problem of estimating the camera pose of a given query image in a scene whose appearance changes over time. It is an important problem in practice, for example, encountered in autonomous driving. In order to gain robustness to such changes, long-term localization approaches often use semantic segmentations as an invariant scene representation, as the semantic meaning of each scene part should not be affected by seasonal and other changes. However, these representations are typically not very discriminative due to the limited number of available classes. In this paper, we propose a new neural network, the Fine-Grained Segmentation Network (FGSN), that can be used to provide image segmentations with a larger number of labels and can be trained in a self-supervised fashion. In addition, we show how FGSNs can be trained to output consistent labels across seasonal changes. We demonstrate through extensive experiments that integrating the fine-grained segmentations produced by our FGSNs into existing localization algorithms leads to substantial improvements in localization performance.
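
A toy sketch of the self-supervised labeling idea, assuming the fine-grained classes are defined by clustering dense image features (shapes, names, and the choice of k are illustrative, not the paper's training code):

```python
import numpy as np
from sklearn.cluster import KMeans

# Stand-in for per-pixel CNN features gathered across many images.
feats = np.random.randn(10000, 256)
k = 100  # number of fine-grained labels, larger than any annotated class set
kmeans = KMeans(n_clusters=k, n_init=10).fit(feats)
pseudo_labels = kmeans.labels_  # cluster indices used as segmentation targets
```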

* Accepted to ICCV 2019 

A Cross-Season Correspondence Dataset for Robust Semantic Segmentation

Mar 16, 2019
Måns Larsson, Erik Stenborg, Lars Hammarstrand, Torsten Sattler, Mark Pollefeys, Fredrik Kahl

In this paper, we present a method to utilize 2D-2D point matches between images taken under different imaging conditions to train a convolutional neural network for semantic segmentation. Enforcing label consistency across the matches makes the final segmentation algorithm robust to seasonal changes. We describe how these 2D-2D matches can be generated with little human interaction by geometrically matching points from 3D models built from images. Two cross-season correspondence datasets are created providing 2D-2D matches across seasonal changes as well as from day to night. The datasets are made publicly available to facilitate further research. We show that adding the correspondences as extra supervision during training improves the segmentation performance of the convolutional neural network, making it more robust to seasonal changes and weather conditions.
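
One simple way to implement this kind of label-consistency supervision (a sketch with our own naming; the paper evaluates several loss variants):

```python
import torch
import torch.nn.functional as F

def correspondence_loss(logits_a, logits_b, pts_a, pts_b):
    """Encourage identical class predictions at matched pixels of two
    images taken under different conditions. logits_*: (C, H, W) score
    maps; pts_*: (N, 2) integer pixel coordinates (x, y) of the matches."""
    pred_a = logits_a[:, pts_a[:, 1], pts_a[:, 0]].T  # (N, C)
    pred_b = logits_b[:, pts_b[:, 1], pts_b[:, 0]].T  # (N, C)
    # Use the (detached) prediction in one image as the target in the other.
    target = pred_a.argmax(dim=-1).detach()
    return F.cross_entropy(pred_b, target)
```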

* In Proc. CVPR 2019 

Poisson Multi-Bernoulli Mapping Using Gibbs Sampling

Nov 07, 2018
Maryam Fatemi, Karl Granström, Lennart Svensson, Francisco J. R. Ruiz, Lars Hammarstrand

This paper addresses the mapping problem. Using a conjugate prior form, we derive the exact theoretical batch multi-object posterior density of the map given a set of measurements. The landmarks in the map are modeled as extended objects, and the measurements are described as a Poisson process, conditioned on the map. We use a Poisson process prior on the map and prove that the posterior distribution is a hybrid Poisson, multi-Bernoulli mixture distribution. We devise a Gibbs sampling algorithm to sample from the batch multi-object posterior. The proposed method can handle uncertainties in the data associations and the cardinality of the set of landmarks, and is parallelizable, making it suitable for large-scale problems. The performance of the proposed method is evaluated on synthetic data and is shown to outperform a state-of-the-art method.
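
To make the sampling loop concrete, here is a toy version; in the paper, the conditional distribution of each association variable couples the measurements through the extended-object likelihood, which this sketch deliberately omits:

```python
import numpy as np

def gibbs_data_association(log_lik, n_iters=1000, seed=0):
    """Toy Gibbs sweep over data-association variables. log_lik[i, j] is
    the log-likelihood of assigning measurement i to hypothesis j (an
    existing landmark, a new landmark, or clutter)."""
    rng = np.random.default_rng(seed)
    n_meas, n_hyp = log_lik.shape
    assoc = rng.integers(n_hyp, size=n_meas)  # random initialization
    samples = []
    for _ in range(n_iters):
        for i in range(n_meas):
            w = np.exp(log_lik[i] - log_lik[i].max())  # stable normalization
            assoc[i] = rng.choice(n_hyp, p=w / w.sum())
        samples.append(assoc.copy())
    return samples  # empirical posterior over associations
```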

* IEEE Transactions on Signal Processing, Vol. 65, Issue 11, June 2017  
* 14 pages, 6 figures 

Benchmarking 6DOF Outdoor Visual Localization in Changing Conditions

Apr 04, 2018
Torsten Sattler, Will Maddern, Carl Toft, Akihiko Torii, Lars Hammarstrand, Erik Stenborg, Daniel Safari, Masatoshi Okutomi, Marc Pollefeys, Josef Sivic, Fredrik Kahl, Tomas Pajdla

Visual localization enables autonomous vehicles to navigate in their surroundings and augmented reality applications to link virtual to real worlds. Practical visual localization approaches need to be robust to a wide variety of viewing conditions, including day-night changes, as well as weather and seasonal variations, while providing highly accurate 6 degree-of-freedom (6DOF) camera pose estimates. In this paper, we introduce the first benchmark datasets specifically designed for analyzing the impact of such factors on visual localization. Using carefully created ground truth poses for query images taken under a wide variety of conditions, we evaluate the impact of various factors on 6DOF camera pose estimation accuracy through extensive experiments with state-of-the-art localization approaches. Based on our results, we draw conclusions about the difficulty of different conditions, showing that long-term localization is far from solved, and propose promising avenues for future work, including sequence-based localization approaches and the need for better local features. Our benchmark is available at visuallocalization.net.
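
For readers implementing this kind of evaluation, 6DOF accuracy is typically reported as a position error and an orientation angle between estimated and ground-truth poses; a minimal sketch with our own naming (camera centers `c`, rotation matrices `R`):

```python
import numpy as np

def pose_errors(R_est, c_est, R_gt, c_gt):
    """Position error (same unit as the map, e.g. meters) and rotation
    error (degrees) between an estimated and a ground-truth camera pose."""
    pos_err = np.linalg.norm(c_est - c_gt)
    # Angle of the relative rotation R_est^T R_gt, clipped for stability.
    cos_angle = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
    rot_err = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return pos_err, rot_err
```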

* Accepted to CVPR 2018 as a spotlight 

Long-term Visual Localization using Semantically Segmented Images

Mar 02, 2018
Erik Stenborg, Carl Toft, Lars Hammarstrand

Robust cross-seasonal localization is one of the major challenges in long-term visual navigation of autonomous vehicles. In this paper, we exploit recent advances in semantic segmentation of images, i.e., where each pixel is assigned a label related to the type of object it represents, to attack the problem of long-term visual localization. We show that semantically labeled 3-D point maps of the environment, together with semantically segmented images, can be efficiently used for vehicle localization without the need for detailed feature descriptors (SIFT, SURF, etc.). Thus, instead of depending on hand-crafted feature descriptors, we rely on the training of an image segmenter. The resulting map takes up much less storage space than a traditional descriptor-based map. A particle-filter-based semantic localization solution is compared to one based on SIFT features; even with large seasonal variations over the year, we perform on par with the larger and more descriptive SIFT features and are able to localize with an error below 1 m most of the time.
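
A heavily simplified sketch of the semantic measurement update: project the labeled 3D map points into the camera given a particle's pose and reward agreement with the segmented image. The `project` function stands in for the camera model, and the agreement probabilities are illustrative assumptions.

```python
import numpy as np

def particle_weight(pose, map_points, map_labels, seg_image, project):
    """Weight one particle by how well the projected semantic map agrees
    with the semantically segmented camera image."""
    uv, visible = project(pose, map_points)  # pixel coords + visibility mask
    u, v = uv[visible].T
    agree = seg_image[v, u] == map_labels[visible]
    p_match, p_miss = 0.9, 0.1  # illustrative label agreement probabilities
    return np.prod(np.where(agree, p_match, p_miss))
```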
