Abstract:Basis Function (BF) expansions are a cornerstone of any engineer's toolbox for computational function approximation and share connections with both neural networks and Gaussian processes. Even though BF expansions are an intuitive and straightforward model to use, they suffer from quadratic computational complexity in the number of BFs if the predictive variance is to be computed. We develop a method that automatically selects the most important BFs for prediction in a sub-domain of the model domain. This significantly reduces the computational complexity of computing predictions while maintaining predictive accuracy. The proposed method is demonstrated using two numerical examples, where reductions of up to 50-75% are possible without significantly reducing the predictive accuracy.
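A minimal sketch of the setting, assuming a Bayesian linear-in-the-weights BF model with Gaussian RBFs (the paper's choice of BFs and its automatic selection rule are not reproduced; the simple support-threshold heuristic below merely stands in for the selection step). It illustrates why the predictive variance is quadratic in the number of BFs used per test point, and how restricting predictions in a sub-domain to a subset of BFs reduces that cost:

```python
import numpy as np

def rbf_features(x, centers, ell=0.5):
    # Gaussian radial basis functions (an illustrative choice; the paper's BFs may differ).
    return np.exp(-0.5 * ((x[:, None] - centers[None, :]) / ell) ** 2)

rng = np.random.default_rng(0)
x_train = rng.uniform(-3, 3, 200)
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(200)

centers = np.linspace(-3, 3, 100)          # m = 100 basis functions
Phi = rbf_features(x_train, centers)       # (N, m)

sigma2, lam = 0.1**2, 1e-2                 # noise variance, prior precision (illustrative)
S = np.linalg.inv(Phi.T @ Phi / sigma2 + lam * np.eye(len(centers)))  # weight posterior covariance
w = S @ Phi.T @ y_train / sigma2                                      # weight posterior mean

# Prediction in a sub-domain: keep only BFs with non-negligible support there
# (a stand-in heuristic, not the paper's automatic selection method).
x_test = np.linspace(0.0, 1.0, 50)
phi_full = rbf_features(x_test, centers)
active = np.max(np.abs(phi_full), axis=0) > 1e-3
phi = phi_full[:, active]

mean = phi @ w[active]
# Predictive variance: quadratic in the number of *active* BFs per test point.
var = sigma2 + np.einsum('nj,jk,nk->n', phi, S[np.ix_(active, active)], phi)
print(f"{active.sum()} of {len(centers)} BFs used in the sub-domain")
```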
Abstract:A method to construct an observability-constrained magnetic-field-aided inertial navigation system is proposed. The proposed method builds upon the previously proposed observability-constrained extended Kalman filter and extends it to work with a magnetic-field-based odometry-aided inertial navigation system. The proposed method is evaluated using simulation and real-world data, showing that (i) the system observability properties are preserved, (ii) the estimation accuracy increases, and (iii) the perceived uncertainty calculated by the EKF is more consistent with the true uncertainty of the filter estimates.
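As background on the observability-constrained EKF that the method builds on, the sketch below shows the standard projection that modifies a measurement Jacobian H so that a chosen unobservable direction remains unobservable after linearization; how the unobservable subspace N is constructed for the magnetic-field-odometry-aided INS is specific to the paper and not reproduced here, and the numbers are purely illustrative:

```python
import numpy as np

def project_jacobian(H, N):
    """Modify the Jacobian H so that the unobservable directions spanned by the
    columns of N stay in its nullspace, i.e. H_oc @ N = 0. This is the standard
    observability-constrained projection H_oc = H - H N (N^T N)^{-1} N^T, the
    closest (Frobenius-norm) Jacobian to H satisfying the constraint."""
    return H - H @ N @ np.linalg.solve(N.T @ N, N.T)

# Illustrative 2D example: the second state direction is declared unobservable.
H = np.array([[1.0, 0.2],
              [0.3, 1.0]])
N = np.array([[0.0],
              [1.0]])          # assumed unobservable direction (illustration only)
H_oc = project_jacobian(H, N)
print(H_oc @ N)                # ~0: the constrained Jacobian no longer "sees" N
```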
Abstract:A Magnetic-field-aided Inertial Navigation System (MAINS) for indoor navigation is proposed in this paper. MAINS leverages an array of magnetometers to measure spatial variations in the magnetic field, which are then used to estimate the displacement and orientation changes of the system, thereby aiding the inertial navigation system (INS). Experiments show that MAINS significantly outperforms the stand-alone INS, demonstrating a remarkable two-orders-of-magnitude reduction in position error. Furthermore, when compared to the state-of-the-art magnetic-field-aided navigation approach, the proposed method exhibits slightly improved horizontal position accuracy, but noticeably larger vertical error on datasets with large magnetic field variations. However, one of the main advantages of MAINS compared to the state of the art is that it enables flexible sensor configurations. The experimental results show that the position error after 2 minutes of navigation is in most cases less than 3 meters when using an array of 30 magnetometers. Thus, the proposed navigation solution has the potential to solve one of the key challenges faced by current magnetic-field simultaneous localization and mapping (SLAM) solutions: the very limited allowable length of the exploration phase, during which unvisited areas are mapped.
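Only as an assumption-laden illustration of the core idea, and not the MAINS algorithm itself (which tightly couples the array measurements with the full INS error state), the sketch below fits an affine field model B(r) ≈ B0 + G r to readings from a synthetic magnetometer array and uses the fitted gradient G to relate a change in the measured field to a displacement of the array. The field model, array geometry, and displacement estimator are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
r = rng.uniform(-0.1, 0.1, size=(30, 3))        # magnetometer positions in the array frame [m]

def field(p):
    # Synthetic, spatially varying magnetic field used only for illustration.
    x, y, z = p
    return np.array([20 + 5*x + 2*y, -10 + 3*y - z, 40 + 4*z + x])

# Fit an affine field model B(r) ~ B0 + G r from the array measurements (least squares).
B_k = np.array([field(ri) for ri in r])
A = np.hstack([np.ones((len(r), 1)), r])
coef, *_ = np.linalg.lstsq(A, B_k, rcond=None)
B0, G = coef[0], coef[1:].T                      # G: 3x3 field-gradient estimate

# One time step later the array has moved by dp; the field at the array origin
# changes by approximately G dp, so dp can be recovered from the field change.
dp_true = np.array([0.02, -0.01, 0.005])
dp_est = np.linalg.solve(G, field(dp_true) - B0)
print(dp_true, dp_est)
```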
Abstract:This paper considers the problem of sequential fusion of predictions from a neural network (NN) and fusion of predictions from multiple NNs. This fusion strategy increases the robustness, i.e., it reduces the impact of a single incorrect classification and enables detection of outliers that the NN has not seen during training. This paper uses the Laplace approximation of Bayesian NNs (BNNs) to quantify the uncertainty necessary for fusion. Here, an extension is proposed such that the prediction of the NN can be represented by a multimodal distribution. Regarding calibration of the estimated uncertainty in the prediction, the performance is significantly improved by having the flexibility to represent a multimodal distribution. Two classical image classification tasks, i.e., MNIST and CIFAR-10, and image sequences from camera traps of carnivores in Swedish forests have been used to demonstrate the fusion strategies and the proposed extension to the Laplace approximation.
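A minimal sketch, assuming a Laplace-style Gaussian approximation placed directly on the logits and an independent-opinion-pool fusion rule (both simplifications; the paper works with the Laplace approximation of the BNN parameters and a multimodal extension of it). It shows how Monte Carlo samples of the logits yield a class PMF with an associated covariance, and how predictions from two networks can then be fused:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_pmf(logit_mean, logit_cov, n_samples=1000):
    """Monte Carlo estimate of the class PMF and its covariance, given a Gaussian
    (Laplace-style) approximation of the logit distribution."""
    L = rng.multivariate_normal(logit_mean, logit_cov, size=n_samples)
    P = np.exp(L - L.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    return P.mean(axis=0), np.cov(P.T)

# Two hypothetical networks classifying the same image into 3 classes.
pmf_a, cov_a = mc_pmf(np.array([2.0, 0.5, -1.0]), 0.3 * np.eye(3))
pmf_b, cov_b = mc_pmf(np.array([1.5, 1.2, -0.5]), 0.5 * np.eye(3))

# Simple fusion via an (assumed) independent opinion pool: normalized product of PMFs.
fused = pmf_a * pmf_b
fused /= fused.sum()
print(pmf_a.round(3), pmf_b.round(3), fused.round(3))
```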
Abstract:This paper considers the problem of detecting and tracking objects in a sequence of images. The problem is formulated in a filtering framework, using the output of object-detection algorithms as measurements. An extension to the filtering formulation is proposed that incorporates class information from the previous frame to robustify the classification, even if the object-detection algorithm outputs an incorrect prediction. Further, the properties of the object-detection algorithm are exploited to quantify the uncertainty of the bounding box detection in each frame. The complete filtering method is evaluated on camera trap images of the four large Swedish carnivores: bear, lynx, wolf, and wolverine. The experiments show that the class tracking formulation leads to a more robust classification.
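The class-tracking idea can be illustrated with a small discrete Bayes filter over the class of a tracked object, where the detector's per-frame class scores act as measurements; the transition probabilities and scores below are illustrative and not taken from the paper:

```python
import numpy as np

classes = ["bear", "lynx", "wolf", "wolverine"]
# Sticky transition model: the class of a tracked object rarely changes between frames.
eps = 0.02
T = (1 - 3 * eps) * np.eye(4) + eps * (1 - np.eye(4))

belief = np.full(4, 0.25)                       # uniform prior over classes
detector_pmfs = [                               # per-frame class scores from the detector
    np.array([0.70, 0.10, 0.10, 0.10]),
    np.array([0.20, 0.15, 0.55, 0.10]),         # one frame with an incorrect top class
    np.array([0.65, 0.15, 0.10, 0.10]),
]

for z in detector_pmfs:
    belief = T @ belief                         # predict: propagate class belief
    belief *= z                                 # update: weight by detector output
    belief /= belief.sum()
    print(classes[int(belief.argmax())], belief.round(3))
```

In the middle frame the detector's top class is wrong, but the propagated belief keeps the correct class, which is the robustification effect described in the abstract.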
Abstract:This letter investigates relationships between iterated filtering algorithms based on statistical linearization, such as the iterated unscented Kalman filter (IUKF), and filtering algorithms based on quasi-Newton (QN) methods, such as the QN iterated extended Kalman filter (QN-IEKF). Firstly, it is shown that the IUKF and the iterated posterior linearization filter (IPLF) can be viewed as QN algorithms, by finding a Hessian correction in the QN-IEKF such that the IPLF iterate updates are identical to those of the QN-IEKF. Secondly, it is shown that the IPLF/IUKF update can be rewritten such that it is approximately identical to the QN-IEKF update, albeit with an additional correction term. This enables a richer understanding of the properties of iterated filtering algorithms based on statistical linearization.
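A compact way to state the background the letter builds on is the classical Gauss-Newton interpretation of the IEKF (Bell and Cathey, 1993): the IEKF iterates are Gauss-Newton steps on the MAP cost J, while a quasi-Newton method replaces the Gauss-Newton Hessian approximation with a corrected matrix B_i. Here \hat{x} and P are the prior mean and covariance, R the measurement noise covariance, and H_i the measurement Jacobian at the i-th iterate; the specific Hessian correction derived in the letter is not reproduced:

```latex
\begin{align}
  J(x) &= \tfrac{1}{2}\lVert x - \hat{x} \rVert_{P^{-1}}^{2}
        + \tfrac{1}{2}\lVert y - h(x) \rVert_{R^{-1}}^{2}, \\
  x_{i+1} &= \hat{x} + K_i\bigl(y - h(x_i) - H_i(\hat{x} - x_i)\bigr),
  \qquad K_i = P H_i^{\top}\bigl(H_i P H_i^{\top} + R\bigr)^{-1}, \\
  x_{i+1} &= x_i - B_i^{-1}\,\nabla J(x_i),
  \qquad B_i \approx \nabla^{2} J(x_i) \quad \text{(quasi-Newton form)}.
\end{align}
```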
Abstract:This letter shows that three classes of recursive state estimation filters, namely standard filters (e.g., the extended Kalman filter), iterated filters (e.g., the iterated unscented Kalman filter), and dynamically iterated filters (e.g., the dynamically iterated posterior linearization filter), can be unified in terms of a general algorithm. The general algorithm highlights the strong similarities between specific filtering algorithms in the three filter classes and facilitates an in-depth understanding of the pros and cons of the different filter classes and algorithms. We end with a numerical example showing the estimation accuracy differences between the three classes of filters when applied to a nonlinear localization problem.
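The unification can be pictured as a single iterated measurement-update loop; the sketch below is an illustrative reading, not the letter's exact general algorithm. With one iteration it reduces to the standard EKF update, with several iterations it becomes an IEKF-style iterated filter, and swapping the Jacobian-based linearization for statistical linearization, or additionally relinearizing the transition model, would give the other variants discussed in the letter:

```python
import numpy as np

def iterated_update(x_prior, P_prior, y, h, H_jac, R, n_iter=10, tol=1e-8):
    """Generic iterated measurement update (illustrative sketch). n_iter=1 gives the
    standard EKF update; n_iter>1 gives an IEKF-style iterated filter."""
    x = x_prior.copy()
    for _ in range(n_iter):
        H = H_jac(x)                                   # (re)linearize the measurement model
        S = H @ P_prior @ H.T + R
        K = P_prior @ H.T @ np.linalg.inv(S)
        x_new = x_prior + K @ (y - h(x) - H @ (x_prior - x))
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new
    P = (np.eye(len(x)) - K @ H) @ P_prior
    return x, P

# Toy range measurement of a 2D position.
h = lambda x: np.array([np.linalg.norm(x)])
H_jac = lambda x: (x / np.linalg.norm(x)).reshape(1, -1)
x0, P0, R = np.array([1.0, 1.0]), np.eye(2), np.array([[0.01]])
print(iterated_update(x0, P0, np.array([2.0]), h, H_jac, R, n_iter=1)[0])   # standard (EKF)
print(iterated_update(x0, P0, np.array([2.0]), h, H_jac, R, n_iter=10)[0])  # iterated
```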
Abstract:A framework for tightly integrated motion mode classification and state estimation in motion-constrained inertial navigation systems is presented. The framework uses a jump Markov model to describe the navigation system's motion mode and navigation state dynamics with a single model. A bank of Kalman filters is then used for joint inference of the navigation state and the motion mode. A method for learning unknown parameters in the jump Markov model, such as the motion mode transition probabilities, is also presented. The application of the proposed framework is illustrated via two examples. The first example is a foot-mounted navigation system that adapts its behavior to different gait speeds. The second example is a foot-mounted navigation system that detects when the user walks on flat ground and locks the vertical position estimate accordingly. Both examples show that the proposed framework provides significantly better position accuracy than a standard zero-velocity aided inertial navigation system. More importantly, the examples show that the proposed framework provides a theoretically well-grounded approach for developing new motion-constrained inertial navigation systems that can learn different motion patterns.
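A stripped-down sketch of the idea, using two illustrative motion modes and omitting the IMM-style mixing between the filters in the bank; all numerical values, including the transition probabilities that the paper learns from data, are made up for illustration:

```python
import numpy as np

# Two motion modes for a 1D position/velocity state: "still" (tiny process noise,
# as when the foot is at rest) and "moving". Values are illustrative only.
F = np.array([[1.0, 1.0], [0.0, 1.0]])
Q = {"still": np.diag([1e-4, 1e-4]), "moving": np.diag([1e-2, 1e-1])}
H = np.array([[1.0, 0.0]]); R = np.array([[0.1]])
Pi = np.array([[0.95, 0.05],        # Pi[i, j] = P(mode i now | mode j before); learned in the paper
               [0.05, 0.95]])
modes = ["still", "moving"]

x = {m: np.zeros(2) for m in modes}
P = {m: np.eye(2) for m in modes}
mu = np.array([0.5, 0.5])           # mode probabilities

def kf_step(x, P, Q, y):
    x, P = F @ x, F @ P @ F.T + Q                       # time update
    s = (H @ P @ H.T + R).item()                        # innovation variance
    e = (y - H @ x).item()                              # innovation
    K = (P @ H.T / s).ravel()
    lik = np.exp(-0.5 * e**2 / s) / np.sqrt(2 * np.pi * s)
    x, P = x + K * e, (np.eye(2) - np.outer(K, H)) @ P  # measurement update
    return x, P, lik

for y in [0.0, 0.1, 0.9, 1.8, 2.9]:                     # synthetic position measurements
    lik = np.zeros(2)
    for i, m in enumerate(modes):
        x[m], P[m], lik[i] = kf_step(x[m], P[m], Q[m], y)
    mu = lik * (Pi @ mu)                                # jump Markov mode-probability update
    mu /= mu.sum()
    print(f"y={y:4.1f}  P(still)={mu[0]:.2f}  P(moving)={mu[1]:.2f}")
```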
Abstract:Classifiers based on neural networks (NNs) often lack a measure of uncertainty in the predicted class. We propose a method to estimate the probability mass function (PMF) of the different classes, as well as the covariance of the estimated PMF. First, a local linear approach is used during the training phase to recursively compute the covariance of the parameters in the NN. Second, in the classification phase, another local linear approach is used to propagate the covariance of the learned NN parameters to the uncertainty in the output of the last layer of the NN. This allows for an efficient Monte Carlo (MC) approach for: (i) estimating the PMF; (ii) calculating the covariance of the estimated PMF; and (iii) proper risk assessment and fusion of multiple classifiers. Two classical image classification tasks, i.e., MNIST and CIFAR-10, are used to demonstrate the efficiency of the proposed method.
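A minimal sketch of the classification-phase steps, assuming a linear last layer and an already-computed weight covariance from the training phase (here simply set to a scaled identity): the parameter covariance is propagated to the logits via a local linearization, and Monte Carlo sampling of the logits gives the PMF estimate and its covariance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: linear last layer z = W phi(x), with a Gaussian approximation
# N(w_hat, P_w) of the flattened weights; P_w would come from the recursive
# training-phase computation described in the abstract.
n_class, n_feat = 3, 5
phi = rng.standard_normal(n_feat)                 # features from the penultimate layer
w_hat = rng.standard_normal(n_class * n_feat)     # learned weights (illustrative values)
P_w = 0.05 * np.eye(n_class * n_feat)             # weight covariance (illustrative)

# Local linear propagation: z = J w with J = I_classes kron phi^T, so Cov(z) = J P_w J^T.
J = np.kron(np.eye(n_class), phi)
z_mean, z_cov = J @ w_hat, J @ P_w @ J.T

# Monte Carlo over the logit distribution gives the PMF estimate and its covariance.
Z = rng.multivariate_normal(z_mean, z_cov, size=2000)
P = np.exp(Z - Z.max(axis=1, keepdims=True))
P /= P.sum(axis=1, keepdims=True)
print("PMF:", P.mean(axis=0).round(3))
print("PMF covariance diagonal:", np.cov(P.T).diagonal().round(4))
```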
Abstract:A new class of iterated linearization-based nonlinear filters, dubbed dynamically iterated filters, is presented. Contrary to regular iterated filters such as the iterated extended Kalman filter (IEKF), the iterated unscented Kalman filter (IUKF), and the iterated posterior linearization filter (IPLF), dynamically iterated filters also take nonlinearities in the transition model into account. The general filtering algorithm is shown to essentially be an iterated Rauch-Tung-Striebel smoother applied locally over one time step. Three distinct versions of the dynamically iterated filters are investigated in particular: analogues to the IEKF, IUKF, and IPLF. The developed algorithms are evaluated on 25 different noise configurations of a tracking problem with a nonlinear transition model and a linear measurement model, a scenario where conventional iterated filters are not useful. Even in this "simple" scenario, the dynamically iterated filters are shown to have superior root-mean-squared error performance compared with their respective baselines, the EKF and the UKF. In particular, even though the EKF diverges in 22 out of 25 configurations, the dynamically iterated EKF remains stable in 20 out of 25 scenarios, only diverging under high noise.
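One plausible reading of the dynamically iterated EKF-style update, written as a sketch rather than the paper's exact algorithm: each iteration relinearizes the transition model around a one-step RTS-smoothed estimate of the previous state and the measurement model around the current filtered estimate; the exact covariance handling and stopping criteria of the actual method are not reproduced:

```python
import numpy as np

def dyn_iter_ekf_update(x_prev, P_prev, y, f, F_jac, Q, h, H_jac, R, n_iter=5):
    """Sketch of a dynamically iterated EKF-style update (assumed structure): both the
    transition and measurement models are relinearized in each iteration."""
    xs, xf = x_prev.copy(), f(x_prev)                  # initial linearization points
    for _ in range(n_iter):
        F = F_jac(xs)
        x_pred = f(xs) + F @ (x_prev - xs)             # prediction, linearized at xs
        P_pred = F @ P_prev @ F.T + Q
        H = H_jac(xf)                                  # measurement update, linearized at xf
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_filt = x_pred + K @ (y - h(xf) - H @ (x_pred - xf))
        P_filt = (np.eye(len(x_pred)) - K @ H) @ P_pred
        G = P_prev @ F.T @ np.linalg.inv(P_pred)       # one-step RTS smoothing of x_{k-1}
        xs = x_prev + G @ (x_filt - x_pred)
        xf = x_filt
    return x_filt, P_filt

# Toy example: nonlinear transition, linear measurement (the setting from the abstract).
f = lambda x: np.array([x[0] + 0.1 * np.sin(x[1]), 0.95 * x[1]])
F_jac = lambda x: np.array([[1.0, 0.1 * np.cos(x[1])], [0.0, 0.95]])
h = lambda x: x[:1]; H_jac = lambda x: np.array([[1.0, 0.0]])
x, P = dyn_iter_ekf_update(np.array([0.0, 1.0]), np.eye(2), np.array([0.3]),
                           f, F_jac, np.diag([0.01, 0.01]), h, H_jac, np.array([[0.05]]))
print(x, P)
```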