
Ryan P. O'Shea

Naval Air Warfare Center Aircraft Division Lakehurst

Monocular Simultaneous Localization and Mapping using Ground Textures

Mar 10, 2023
Kyle M. Hart, Brendan Englot, Ryan P. O'Shea, John D. Kelly, David Martinez

Recent work has shown impressive localization performance using only images of ground textures taken with a downward-facing monocular camera. This provides a reliable navigation method that is robust to feature-sparse environments and challenging lighting conditions. However, these localization methods require an existing map for comparison. Our work aims to relax the need for a map by introducing a full simultaneous localization and mapping (SLAM) system. Because no existing map is required, setup time is minimized and the system is more robust to changing environments. The SLAM system combines several techniques to accomplish this. Image keypoints are identified and projected into the ground plane. These keypoints, visual bags of words, and several threshold parameters are then used to identify overlapping images and revisited areas. The system then uses robust M-estimators to estimate the transform between robot poses with overlapping images and revisited areas. These optimized estimates make up the map used for navigation. We show, through experimental data, that this system performs reliably on many ground textures, but not all.
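As a rough illustration of two steps the abstract describes, the sketch below detects keypoints, projects them onto the ground plane for a downward-facing camera, and estimates the relative pose between two overlapping frames with a Huber M-estimator (iteratively reweighted least squares). This is a minimal sketch and not the authors' implementation; the file names, camera intrinsics (fx, fy, cx, cy), camera height h, and Huber threshold delta are all assumed values for illustration.

# Minimal sketch (not the paper's code): ground-plane keypoint projection
# and robust 2D rigid-transform estimation with a Huber M-estimator.
import numpy as np
import cv2

def project_to_ground(pts_px, fx, fy, cx, cy, h):
    # Pixel -> ground-plane metric coordinates, assuming the camera
    # looks straight down from height h.
    pts_px = np.asarray(pts_px, dtype=float)
    x = (pts_px[:, 0] - cx) * h / fx
    y = (pts_px[:, 1] - cy) * h / fy
    return np.column_stack([x, y])

def huber_rigid_transform(src, dst, delta=0.02, iters=10):
    # Estimate R, t mapping src -> dst via iteratively reweighted
    # least squares with Huber weights (a simple M-estimator).
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    w = np.ones(len(src))
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        ws = w / w.sum()
        mu_s, mu_d = ws @ src, ws @ dst
        # Weighted Kabsch/Procrustes solution for the current weights.
        H = ((src - mu_s) * w[:, None]).T @ (dst - mu_d)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, d]) @ U.T
        t = mu_d - R @ mu_s
        # Huber weights: down-weight large residuals.
        r = np.linalg.norm(src @ R.T + t - dst, axis=1)
        w = np.where(r <= delta, 1.0, delta / np.maximum(r, 1e-12))
    return R, t

# Detect and match ORB keypoints between two overlapping ground images
# (illustrative file names).
img_a = cv2.imread("ground_a.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("ground_b.png", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(2000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_a, des_b)

# Project matches to the ground plane (assumed intrinsics and height)
# and recover the relative pose between the two frames.
fx = fy = 600.0; cx, cy = 320.0, 240.0; h = 0.1
pts_a = project_to_ground([kp_a[m.queryIdx].pt for m in matches], fx, fy, cx, cy, h)
pts_b = project_to_ground([kp_b[m.trainIdx].pt for m in matches], fx, fy, cx, cy, h)
R, t = huber_rigid_transform(pts_a, pts_b)
print("relative rotation:\n", R, "\ntranslation (m):", t)

In the full system these pairwise estimates would feed a pose-graph optimization over overlapping and revisited frames; that step is omitted here.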

* 7 pages, 9 figures. To appear at ICRA 2023, London, UK. Distribution Statement A: Approved for public release; distribution is unlimited, as submitted under NAVAIR Public Release Authorization 2022-0586. The views expressed here are those of the authors and do not reflect the official policy or position of the U.S. Navy, Department of Defense, or U.S. Government 

Automatic Generation of Machine Learning Synthetic Data Using ROS

Jun 08, 2021
Kyle M. Hart, Ari B. Goodman, Ryan P. O'Shea

Data labeling is a time-intensive process. As such, many data scientists use various tools to aid in the data generation and labeling process. While these tools help automate labeling, many still require user interaction throughout the process. Additionally, most target only a few network frameworks. Researchers exploring multiple frameworks must find additional tools or write conversion scripts. This paper presents an automated tool for generating synthetic data in arbitrary network formats. It uses Robot Operating System (ROS) and Gazebo, which are common tools in the robotics community. Through ROS paradigms, it allows extensive user customization of the simulation environment and data generation process. Additionally, a plugin-like framework allows the development of arbitrary data format writers without the need to change the main body of code. Using this tool, the authors were able to generate an arbitrarily large image dataset for three unique training formats using approximately 15 minutes of user setup time and a variable amount of hands-off run time, depending on the dataset size. The source code for this data generation tool is available at https://github.com/Navy-RISE-Lab/nn_data_collection
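The plugin-like writer framework could look something like the sketch below. This is an illustration of the general pattern only, not the actual API of the nn_data_collection repository; the names DatasetWriter, write_sample, register_writer, and the WRITERS registry are assumptions made for the example.

# Sketch of a plugin-style writer registry for synthetic-data export.
from abc import ABC, abstractmethod
import json
import os

WRITERS = {}  # registry: format name -> writer class (assumed design)

def register_writer(name):
    # Class decorator that registers a writer under a format name.
    def wrap(cls):
        WRITERS[name] = cls
        return cls
    return wrap

class DatasetWriter(ABC):
    # Base class: new output formats subclass this without touching
    # the main data-generation loop.
    def __init__(self, out_dir):
        self.out_dir = out_dir
        os.makedirs(out_dir, exist_ok=True)

    @abstractmethod
    def write_sample(self, image_path, boxes, labels):
        """Persist one labeled image in the target format."""

@register_writer("yolo")
class YoloWriter(DatasetWriter):
    def write_sample(self, image_path, boxes, labels):
        # One "class x_center y_center width height" line per object,
        # with coordinates already normalized to [0, 1].
        txt = os.path.splitext(os.path.basename(image_path))[0] + ".txt"
        with open(os.path.join(self.out_dir, txt), "w") as f:
            for (x, y, w, h), label in zip(boxes, labels):
                f.write(f"{label} {x} {y} {w} {h}\n")

@register_writer("json")
class JsonWriter(DatasetWriter):
    def write_sample(self, image_path, boxes, labels):
        record = {"image": image_path, "boxes": boxes, "labels": labels}
        name = os.path.splitext(os.path.basename(image_path))[0] + ".json"
        with open(os.path.join(self.out_dir, name), "w") as f:
            json.dump(record, f)

# The generation loop only sees the abstract interface, so adding a new
# format means adding one writer class, not editing the main body of code.
writer = WRITERS["yolo"]("dataset/labels")
writer.write_sample("frame_0001.png", [(0.5, 0.5, 0.2, 0.3)], [0])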

* DISTRIBUTION STATEMENT A. Approved for public release: distribution unlimited. NAWCAD-LKE Release Number 2021-72. Published in HCI International 2021 by Springer. The final authenticated version is available online at https://doi.org/10.1007/978-3-030-77772-2_21 