An inertial navigation system (INS) uses three orthogonal accelerometers and three orthogonal gyroscopes to determine the platform's position, velocity, and orientation. INS applications are numerous, spanning robotics, autonomous platforms, and the internet of things. Recent research explores the integration of data-driven methods with INS, yielding significant improvements in accuracy and efficiency. Despite the growing interest in this field and the availability of INS datasets, no datasets are available for gyro-free INS (GFINS) and multiple inertial measurement unit (MIMU) architectures. To fill this gap and stimulate further research in the field, we designed and recorded GFINS and MIMU datasets using 54 inertial sensors grouped into nine inertial measurement units. These sensors can be used to define and evaluate different types of MIMU and GFINS architectures. The inertial sensors were arranged in three different sensor configurations and mounted on a mobile robot and a passenger car. In total, the dataset contains 35 hours of inertial data and corresponding ground-truth trajectories. The data and code are freely accessible through our GitHub repository.
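As a rough illustration of why a MIMU array helps, averaging the synchronized readings of N independent IMUs reduces white-noise variance by roughly 1/N. The sketch below (plain NumPy; the function name and array shapes are our own illustrative choices, not the dataset's processing API) forms such a "virtual IMU":

```python
import numpy as np

def virtual_imu(accel_readings, gyro_readings):
    """Fuse readings from multiple co-located IMUs into a single
    'virtual IMU' by averaging; for N independent sensors this
    reduces white-noise variance by roughly a factor of N.

    accel_readings, gyro_readings: arrays of shape (N, 3), one row
    per IMU (specific force [m/s^2], angular rate [rad/s])."""
    accel_readings = np.asarray(accel_readings, dtype=float)
    gyro_readings = np.asarray(gyro_readings, dtype=float)
    return accel_readings.mean(axis=0), gyro_readings.mean(axis=0)
```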
Autonomous underwater vehicles are specialized platforms engineered for deep underwater operations. Critical to their functionality is autonomous navigation, typically relying on an inertial navigation system and a Doppler velocity log. In real-world scenarios, incomplete Doppler velocity log measurements occur, resulting in positioning errors and mission aborts. To cope with such situations, both model-based and learning approaches have been derived. This paper presents a comparative analysis of two cutting-edge deep-learning methodologies, namely LiBeamsNet and MissBeamNet, alongside a model-based average estimator. These approaches are evaluated for their efficacy in regressing missing Doppler velocity log beams when two beams are unavailable. In our study, we used data recorded by a DVL mounted on an autonomous underwater vehicle operating in the Mediterranean Sea. We found that both deep-learning architectures outperformed the model-based approach by over 16% in velocity prediction accuracy.
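For background on why missing beams matter: a DVL recovers the body-frame velocity by least squares from its per-beam velocities, and with only two beams that system becomes rank-deficient, which is why the missing beams must first be regressed. A minimal sketch, assuming an illustrative four-beam Janus geometry (45° azimuth spacing, 20° pitch) rather than the actual sensor's:

```python
import numpy as np

def dvl_beam_matrix(pitch_deg=20.0):
    """Direction matrix of a 4-beam Janus DVL: each row maps the
    body-frame velocity to the velocity measured along one beam.
    The 45/135/225/315 deg azimuths and common pitch angle are an
    illustrative convention, not a specific sensor's geometry."""
    pitch = np.deg2rad(pitch_deg)
    az = np.deg2rad([45.0, 135.0, 225.0, 315.0])
    return np.column_stack([
        np.cos(az) * np.sin(pitch),
        np.sin(az) * np.sin(pitch),
        np.full(4, np.cos(pitch)),
    ])

def velocity_from_beams(beam_vels, rows):
    """Least-squares body velocity from the available beams.
    With three or four beams the system is (over)determined; with
    only two it is rank-deficient, hence the need to regress the
    missing beams first (e.g., LiBeamsNet / MissBeamNet)."""
    A = dvl_beam_matrix()[rows]
    return np.linalg.pinv(A) @ np.asarray(beam_vels, dtype=float)
```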
Autonomous underwater vehicles (AUVs) are used in a wide range of underwater applications, ranging from seafloor mapping to industrial operations. While underwater, the AUV navigation solution commonly relies on the fusion between inertial sensors and Doppler velocity logs (DVL). To achieve accurate DVL measurements, a calibration procedure should be conducted before the mission begins. Model-based calibration approaches include filtering techniques that utilize global navigation satellite system signals. In this paper, we propose an end-to-end deep-learning framework for the calibration procedure. Using simulated data, we show that our proposed approach outperforms model-based approaches by 35% in accuracy and 80% in the required calibration time.
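The model-based idea being compared against can be illustrated by the simplest possible calibration: estimating a single scale factor that best maps DVL-measured speeds onto a GNSS-derived reference. This one-parameter least-squares sketch is our simplification, not the paper's estimator:

```python
import numpy as np

def dvl_scale_calibration(v_dvl, v_ref):
    """Closed-form least-squares scale factor s minimizing
    ||v_ref - s * v_dvl||^2 over paired velocity samples.
    Real calibrations also estimate misalignment and bias; this
    scalar version only illustrates the model-based principle."""
    v_dvl = np.asarray(v_dvl, dtype=float).ravel()
    v_ref = np.asarray(v_ref, dtype=float).ravel()
    return (v_dvl @ v_ref) / (v_dvl @ v_dvl)
```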
The extended Kalman filter (EKF) is a widely adopted method for sensor fusion in navigation applications. A crucial aspect of the EKF is the online determination of the process noise covariance matrix reflecting the model uncertainty. While common EKF implementations assume a constant process noise, in real-world scenarios the process noise varies, leading to inaccuracies in the estimated state and potentially causing the filter to diverge. To cope with such situations, model-based adaptive EKF methods were proposed and demonstrated performance improvements, highlighting the need for a robust adaptive approach. In this paper, we derive and introduce A-KIT, an adaptive Kalman-informed transformer to learn the varying process noise covariance online. The A-KIT framework is applicable to any type of sensor fusion. Here, we present our approach for nonlinear sensor fusion based on an inertial navigation system and a Doppler velocity log. Employing real recorded data from an autonomous underwater vehicle, we show that A-KIT outperforms the conventional EKF by more than 49.5% and model-based adaptive EKFs by an average of 35.4% in terms of position accuracy.
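For context, a classic model-based adaptive baseline re-estimates the process noise covariance from a window of recent innovations. The sketch below runs one linear-KF cycle with such innovation-based adaptation (a Mohamed-and-Schwarz-style estimate; function name and shapes are illustrative, and this is the baseline idea, not A-KIT):

```python
import numpy as np

def adaptive_kf_step(x, P, Q, z, F, H, R, history, window=10):
    """One cycle of a linear KF with innovation-based adaptive
    process noise: Q is re-estimated from the sample covariance of
    recent innovations so the filter can track time-varying model
    error. `history` is a list of past innovations (mutated)."""
    # --- predict with the most recent Q estimate ---
    x = F @ x
    P = F @ P @ F.T + Q
    # --- update ---
    nu = z - H @ x                       # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ nu
    P = (np.eye(len(x)) - K @ H) @ P
    # --- adapt Q from the windowed innovation covariance ---
    history.append(nu)
    if len(history) > window:
        history.pop(0)
    C = np.mean([n[:, None] @ n[None, :] for n in history], axis=0)
    Q_new = K @ C @ K.T
    return x, P, Q_new
```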
Inertial navigation systems (INS) are widely used in both manned and autonomous platforms. One of the most critical tasks prior to their operation is to accurately determine their initial alignment while stationary, as it forms the cornerstone for the entire INS operational trajectory. While low-performance accelerometers can easily determine roll and pitch angles (leveling), establishing the heading angle (gyrocompassing) with low-performance gyros proves to be a challenging task without additional sensors. This arises from the limited signal strength of Earth's rotation rate, which is often masked by the gyro's own noise. To circumvent this deficiency, in this study we present a practical deep-learning framework that effectively compensates for the inherent errors of low-performance gyroscopes. The resulting capability enables gyrocompassing, thereby eliminating the need for a subsequent prolonged filtering phase (fine alignment). Through the development of theory and experimental validation, we demonstrate that the improved initial conditions establish a new lower error bound, bringing affordable gyros one step closer to being utilized in high-end tactical tasks.
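The leveling and gyrocompassing steps mentioned above can be sketched as follows, under one common NED sign convention (conventions vary between references, so treat the signs as an assumption of this sketch):

```python
import numpy as np

def leveling(f_b):
    """Roll and pitch from a stationary accelerometer's specific
    force f_b = [fx, fy, fz] (body frame, NED convention assumed;
    at rest the accelerometer senses minus gravity)."""
    fx, fy, fz = f_b
    roll = np.arctan2(-fy, -fz)
    pitch = np.arctan2(fx, np.hypot(fy, fz))
    return roll, pitch

def gyrocompass(omega_level):
    """Heading from the leveled gyro measurement of Earth's
    rotation rate, omega_level = [wx, wy] in rad/s:
    psi = atan2(-wy, wx). With low-performance gyros this tiny
    signal (~7.29e-5 rad/s at most) is buried in sensor noise,
    which is exactly the error the learning framework targets."""
    wx, wy = omega_level
    return np.arctan2(-wy, wx)
```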
Quadrotors are widely used for surveillance, mapping, and deliveries. In several scenarios, the quadrotor operates in pure inertial navigation mode, resulting in navigation solution drift. To handle such situations and bound the navigation drift, the quadrotor dead reckoning (QDR) approach requires flying the quadrotor in a periodic trajectory. Then, using model- or learning-based approaches, the quadrotor position vector can be estimated. We propose using multiple inertial measurement units (MIMU) to improve the positioning accuracy of the QDR approach. Several methods for utilizing MIMU data in a deep-learning framework are derived and evaluated. Field experiments were conducted to validate the proposed approach and show its benefits.
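Two simple ways to feed MIMU data into a learning model are early fusion (stacking all sensor channels into one input) and a late-fusion "virtual IMU" average. The sketch below illustrates both with made-up window shapes; it is not the paper's specific architecture:

```python
import numpy as np

def stack_mimu_window(windows):
    """Early fusion: stack the synchronized windows of N IMUs
    (each of shape (T, 6): 3 accel + 3 gyro channels) into one
    (T, 6N) feature array for a learning-based QDR regressor."""
    windows = [np.asarray(w, dtype=float) for w in windows]
    return np.concatenate(windows, axis=1)

def virtual_imu_window(windows):
    """Late-fusion baseline: average the N windows channel-wise
    into a single (T, 6) 'virtual IMU' window."""
    return np.mean([np.asarray(w, dtype=float) for w in windows], axis=0)
```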
By utilizing global navigation satellite system (GNSS) position and velocity measurements, the fusion between the GNSS and the inertial navigation system provides accurate and robust navigation information. When considering land vehicles, such as autonomous ground vehicles, off-road vehicles, or mobile robots, a GNSS-based heading angle measurement can be obtained and used in parallel with the position measurement to bound the heading angle drift. Yet, at low vehicle speeds (less than 2 m/s), such a model-based heading measurement fails to provide satisfactory performance. This paper proposes GHNet, a deep-learning framework capable of accurately regressing the heading angle for vehicles operating at low speeds. We demonstrate that GHNet outperforms the current model-based approach on simulation and experimental datasets.
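The model-based measurement referred to here derives heading from the GNSS velocity components; at low speed the velocity noise dominates the signal and the estimate degrades, which is the regime GHNet targets. A minimal sketch (NED components assumed):

```python
import numpy as np

def gnss_heading(v_north, v_east):
    """Model-based heading (rad) from GNSS velocity components.
    The angular error scales roughly with noise/speed, so the
    estimate deteriorates as the vehicle slows down."""
    return np.arctan2(v_east, v_north)
```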
State estimation of dynamical systems from noisy observations is a fundamental task in many applications. It is commonly addressed using the linear Kalman filter (KF), whose performance can significantly degrade in the presence of outliers in the observations, due to the sensitivity of its convex quadratic objective function. To mitigate such behavior, outlier detection algorithms can be applied. In this work, we propose a parameter-free algorithm which mitigates the harmful effect of outliers while requiring only a short iterative process of the standard update step of the KF. To that end, we model each potential outlier as a normal process with unknown variance and apply online estimation through either expectation maximization or alternating maximization algorithms. Simulations and field experiment evaluations demonstrate competitive performance of our method, showcasing its robustness to outliers in filtering scenarios compared to alternative algorithms.
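The alternating-maximization idea can be sketched as follows: alternate between the standard KF update and re-estimating each measurement's extra (outlier) variance from its squared residual. This is our simplified rendition of the principle, not the paper's exact algorithm:

```python
import numpy as np

def robust_kf_update(x, P, z, H, R, iters=5):
    """Outlier-robust KF update: model each measurement's outlier
    component as zero-mean normal with unknown extra variance, and
    alternate between (a) the standard KF update and (b) setting
    that variance to the residual power beyond the nominal noise.
    Large residuals inflate the effective R, de-weighting outliers."""
    extra = np.zeros(len(z))             # per-measurement outlier variance
    for _ in range(iters):
        S = H @ P @ H.T + R + np.diag(extra)
        K = P @ H.T @ np.linalg.inv(S)
        x_new = x + K @ (z - H @ x)
        resid = z - H @ x_new
        extra = np.maximum(resid**2 - np.diag(R), 0.0)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

Note the fixed point: for inlier measurements the residual power stays below the nominal noise level, `extra` collapses to zero, and the update reduces to the standard KF step.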
Autonomous inspection tasks necessitate effective path-planning mechanisms to efficiently gather observations from points of interest (POI). However, localization errors commonly encountered in urban environments can introduce execution uncertainty, posing challenges to the successful completion of such tasks. To tackle these challenges, we present IRIS-under uncertainty (IRIS-U^2), an extension of the incremental random inspection-roadmap search (IRIS) algorithm, that addresses the offline planning problem via an A*-based approach, where the planning process occurs prior to the online execution. The key insight behind IRIS-U^2 is transforming the computed localization uncertainty, obtained through Monte Carlo (MC) sampling, into a POI probability. IRIS-U^2 offers insights into the expected performance of the execution task by providing confidence intervals (CI) for the expected coverage, expected path length, and collision probability, which become progressively tighter as the number of MC samples increases. The efficacy of IRIS-U^2 is demonstrated through a case study focusing on structural inspections of bridges. Our approach exhibits improved expected coverage, reduced collision probability, and yields increasingly precise CIs as the number of MC samples grows. Furthermore, we emphasize the potential advantages of computing bounded sub-optimal solutions to reduce computation time while still maintaining the same CI boundaries.
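The way more MC samples tighten the reported CIs can be illustrated with a simple normal-approximation interval for a coverage probability, whose half-width shrinks as 1/sqrt(n). This is an illustrative estimator, not IRIS-U^2's exact machinery:

```python
import numpy as np

def coverage_ci(successes, n, z=1.96):
    """MC estimate of a POI coverage probability with a 95%
    normal-approximation confidence interval. The half-width
    z*sqrt(p(1-p)/n) decays as 1/sqrt(n), so the interval tightens
    as the number of MC samples grows."""
    p = successes / n
    half = z * np.sqrt(p * (1.0 - p) / n)
    return p, (max(0.0, p - half), min(1.0, p + half))
```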