Simon Watson

Millimeter-Wave Sensing for Avoidance of High-Risk Ground Conditions for Mobile Robots

Mar 30, 2022
Jamie Blanche, Shivoh Chirayil Nandakumar, Daniel Mitchell, Sam Harper, Keir Groves, Andrew West, Barry Lennox, Simon Watson, David Flynn, Ikuo Yamamoto

Mobile robot autonomy has made significant advances in recent years, with navigation algorithms well developed and used commercially in certain well-defined environments such as warehouses. The common link across these usage scenarios is that the environments in which the robots operate have a high degree of certainty. Operating environments are often designed to be robot friendly; for example, augmented reality markers are strategically placed and the ground is typically smooth, level, and clear of debris. For robots to be useful in a wider range of environments, especially environments that are not sanitized for their use, they must be able to handle uncertainty. This requires a robot to incorporate new sensors and sources of information, and to use this information to make decisions regarding navigation and the overall mission. When autonomous mobile robots are used in unstructured and poorly defined environments, such as a natural disaster site or a rural environment, ground condition is of critical importance and is a common cause of failure. Examples include loss of traction due to high levels of ground water, hidden cavities, or material boundary failures. To evaluate a non-contact sensing method that mitigates these risks, Frequency Modulated Continuous Wave (FMCW) radar is integrated with an Unmanned Ground Vehicle (UGV), representing a novel application of FMCW radar to detect new measurands for Robotic Autonomous Systems (RAS) navigation, informing on terrain integrity and adding to the state of the art in sensing for optimized autonomous path planning. In this paper, the FMCW radar is first evaluated in a desktop setting to determine its performance in anticipated ground conditions. The radar is then fixed to the UGV and the sensor system is tested and validated in a representative environment containing regions with significant levels of ground water saturation.

* 6 pages, 9 figures 
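
As a toy illustration of the sensing principle, the range measurement in FMCW radar falls out of a single relation: a target at range R delays the reflected chirp, and the resulting beat frequency is proportional to range. A minimal sketch in Python, with illustrative sweep parameters that are assumptions rather than the paper's sensor settings:

```python
# Minimal FMCW range calculation: a target at range R delays the
# reflected chirp by tau = 2R/c, producing a beat frequency
# f_b = S * tau, where S = B / T is the chirp slope.

C = 3e8          # speed of light, m/s

def fmcw_range(beat_freq_hz, bandwidth_hz, chirp_time_s):
    """Convert a measured beat frequency into target range (metres)."""
    slope = bandwidth_hz / chirp_time_s      # chirp slope S, Hz/s
    delay = beat_freq_hz / slope             # round-trip delay tau, s
    return C * delay / 2.0                   # one-way range, m

# Example (assumed values): a sensor sweeping 250 MHz in 1 ms that
# measures a 5 kHz beat tone sees a target at ~3 m.
print(fmcw_range(5e3, 250e6, 1e-3))  # 3.0
```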

MIRRAX: A Reconfigurable Robot for Limited Access Environments

Mar 01, 2022
Wei Cheah, Keir Groves, Horatio Martin, Harriet Peel, Simon Watson, Ognjen Marjanovic, Barry Lennox

The development of mobile robot platforms for inspection has gained traction in recent years with rapid advances in hardware and software. However, conventional mobile robots are unable to address the challenge of operating in extreme environments, where the robot must traverse narrow gaps in highly cluttered areas with restricted access. This paper presents MIRRAX, a robot designed to meet these challenges with the capability to reconfigure itself both to access restricted environments through narrow ports and to navigate through tightly spaced obstacles. Controllers for the robot are detailed, along with an analysis of its controllability given the use of Mecanum wheels in a variable configuration. Characterisation of the robot's performance identified suitable configurations for operating in narrow environments: the minimum lateral footprint width achievable for a stable configuration (<2° roll) was 0.19 m. Experimental validation of the robot's controllability shows good agreement with the theoretical analysis. A further series of experiments demonstrates the robot's ability to address the challenges above: reconfiguring itself for restricted entry through ports as small as 150 mm in diameter, and navigating through cluttered environments. The paper also presents results from a deployment in a Magnox facility at the Sellafield nuclear site in the UK, for remote inspection and mapping -- the first robot ever to do so.

* 10 pages, Under review for IEEE Transactions on Robotics 
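
The controllability analysis builds on standard Mecanum-wheel kinematics. As a hedged sketch, the textbook inverse kinematics for a fixed rectangular four-wheel layout is shown below; the paper's variable-configuration model generalises this, and the wheel radius and geometry values here are placeholder assumptions:

```python
import numpy as np

# Inverse kinematics for a fixed rectangular Mecanum platform:
# wheel angular velocities from a desired body twist (vx, vy, wz).
# Standard result for 45-degree rollers; wheel order FL, FR, RL, RR.

def mecanum_wheel_speeds(vx, vy, wz, r=0.05, lx=0.2, ly=0.15):
    """r: wheel radius; lx/ly: half wheelbase/track (metres)."""
    L = lx + ly
    J = np.array([[1, -1, -L],
                  [1,  1,  L],
                  [1,  1, -L],
                  [1, -1,  L]])
    return (J @ np.array([vx, vy, wz])) / r   # rad/s per wheel

# Pure lateral translation: all four wheels spin, the body moves sideways.
print(mecanum_wheel_speeds(0.0, 0.3, 0.0))    # [-6.  6.  6. -6.]
```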

A Review: Challenges and Opportunities for Artificial Intelligence and Robotics in the Offshore Wind Sector

Dec 13, 2021
Daniel Mitchell, Jamie Blanche, Sam Harper, Theodore Lim, Ranjeetkumar Gupta, Osama Zaki, Wenshuo Tang, Valentin Robu, Simon Watson, David Flynn

A global trend towards larger wind turbines sited further from shore is emerging within the rapidly growing offshore wind farm market. In 2019, the UK offshore wind sector produced its highest electricity output to date, a 19.6% increase on the year before. The UK is now set to increase production further, targeting a 74.7% increase in installed turbine capacity as reflected in recent Crown Estate leasing rounds. With such tremendous growth, the sector is looking to Robotics and Artificial Intelligence (RAI) to tackle lifecycle service barriers and support sustainable and profitable offshore wind energy production. Today, RAI applications are predominantly used to support short-term objectives in operation and maintenance. Moving forward, however, RAI has the potential to play a critical role throughout the full lifecycle of offshore wind infrastructure, spanning surveying, planning, design, logistics, operational support, training and decommissioning. This paper presents one of the first systematic reviews of RAI for the offshore renewable energy sector. The state of the art in RAI is analyzed with respect to offshore energy requirements, from both industry and academia, in terms of current and future needs. Our review also includes a detailed evaluation of the investment, regulation and skills development required to support the adoption of RAI. The key trends identified through a detailed analysis of patent and academic publication databases provide insights into barriers such as certification of autonomous platforms for safety compliance and reliability, the need for digital architectures enabling scalability in autonomous fleets, adaptive mission planning for resilient resident operations, and optimization of human-machine interaction for trusted partnerships between people and autonomous assistants.

* 49 pages, 36 figures 

CNN-Based Semantic Change Detection in Satellite Imagery

Jun 10, 2020
Ananya Gupta, Elisabeth Welburn, Simon Watson, Hujun Yin

Timely disaster risk management requires accurate road maps and prompt damage assessment. Currently, this is done by volunteers manually marking satellite imagery of affected areas, but the process is slow and often error-prone. Segmentation algorithms can be applied to satellite images to detect road networks. However, existing methods are unsuitable for disaster-struck areas as they make assumptions about the road network topology that may no longer be valid in these scenarios. Herein, we propose a CNN-based framework for identifying accessible roads in post-disaster imagery by detecting changes from pre-disaster imagery. Graph theory is combined with the CNN output and OpenStreetMap data to detect semantic changes in road networks. Our results are validated with data from a tsunami-affected region in Palu, Indonesia, acquired from DigitalGlobe.

* Proceedings of the International Conference on Artificial Neural Networks, 2019, pp. 669-684 
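
The graph-theoretic step can be illustrated with a toy example: compare the pre-disaster road graph against the edges still detected in post-disaster imagery and flag missing edges as candidate blocked roads. A minimal sketch using networkx, with made-up node labels standing in for OpenStreetMap data; this is an illustration, not the paper's pipeline:

```python
import networkx as nx

# Pre-disaster road network (e.g. derived from OpenStreetMap).
pre = nx.Graph()
pre.add_edges_from([("A", "B"), ("B", "C"), ("C", "D"), ("B", "D")])

# Edges the segmentation CNN still detects in post-disaster imagery.
post_detected = {("A", "B"), ("B", "D")}

# Edges present before but no longer detected -> candidate blocked roads.
blocked = [e for e in pre.edges
           if e not in post_detected and tuple(reversed(e)) not in post_detected]
print("Possibly impassable:", blocked)     # [('B', 'C'), ('C', 'D')]

# Accessibility query on the surviving subgraph.
accessible = pre.edge_subgraph(post_detected).copy()
print(nx.has_path(accessible, "A", "D"))   # True, via B
```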

Deep Learning-based Aerial Image Segmentation with Open Data for Disaster Impact Assessment

Jun 10, 2020
Ananya Gupta, Simon Watson, Hujun Yin

Satellite images are an extremely valuable resource in the aftermath of natural disasters such as hurricanes and tsunamis, where they can be used for risk assessment and disaster management. To provide timely and actionable information for disaster response, this paper proposes a framework utilising segmentation neural networks to identify impacted areas and accessible roads in post-disaster scenarios. The effectiveness of ImageNet pretraining for aerial image segmentation is analysed and the performance of popular segmentation models compared. Experimental results show that pretraining on ImageNet usually improves segmentation performance for a number of models. Open data available from OpenStreetMap (OSM) is used for training, forgoing the need for time-consuming manual annotation. The method also makes use of graph theory to update the road network data available from OSM and to detect the changes caused by a natural disaster. Extensive experiments on data from the 2018 tsunami that struck Palu, Indonesia, show the effectiveness of the proposed framework. ENetSeparable, with 30% fewer parameters than ENet, achieved segmentation results comparable to those of state-of-the-art networks.

* Accepted in Neurocomputing, 2020 
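
The pretraining comparison amounts to initialising the segmentation encoder from ImageNet weights versus from scratch. A minimal PyTorch sketch, assuming a recent torchvision; ResNet-34 stands in here for the encoders studied, since ENet and ENetSeparable are not available in torchvision, and the decoder is elided:

```python
import torch
import torchvision

# Encoder initialised from ImageNet weights vs. from scratch; the paper's
# finding is that ImageNet initialisation usually helps aerial segmentation.
pretrained = torchvision.models.resnet34(weights="IMAGENET1K_V1")
scratch = torchvision.models.resnet34(weights=None)

# Strip the classification head to reuse the network as an encoder.
encoder = torch.nn.Sequential(*list(pretrained.children())[:-2])

with torch.no_grad():
    feats = encoder(torch.randn(1, 3, 256, 256))
print(feats.shape)   # torch.Size([1, 512, 8, 8])
```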

Tree Annotations in LiDAR Data Using Point Densities and Convolutional Neural Networks

Jun 09, 2020
Ananya Gupta, Jonathan Byrne, David Moloney, Simon Watson, Hujun Yin

LiDAR provides highly accurate 3D point clouds. However, the data needs to be manually labelled before it can yield useful information. Manual annotation of such data is time-consuming, tedious and error-prone, so in this paper we present three automatic methods for annotating trees in LiDAR data. The first method requires high-density point clouds and uses certain LiDAR data attributes to identify trees, achieving almost 90% accuracy. The second method uses a voxel-based 3D Convolutional Neural Network on low-density LiDAR datasets and is able to identify most large trees accurately, but struggles with smaller ones due to the voxelisation process. The third method, a scaled version of PointNet++, works directly on outdoor point clouds and achieves an F-score of 82.1% on the ISPRS benchmark dataset, comparable to state-of-the-art methods but with increased efficiency.

* IEEE Transactions on Geoscience and Remote Sensing, vol. 58, no. 2, pp. 971-981, Feb. 2020  
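
Both the density-based method and the voxel-based CNN start from the same primitive: binning points into a voxel grid. A minimal numpy sketch of voxelisation and per-voxel point densities, with a synthetic cloud and an arbitrary grid resolution (both assumptions of this example):

```python
import numpy as np

def voxelise(points, voxel_size=0.5):
    """Map an (N, 3) point cloud to voxel indices and per-voxel counts."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    voxels, counts = np.unique(idx, axis=0, return_counts=True)
    return voxels, counts

# Toy cloud: a dense cluster (tree-like) plus sparse background scatter.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal([5, 5, 2], 0.3, (500, 3)),
                   rng.uniform(0, 10, (50, 3))])

voxels, counts = voxelise(cloud)
# High-density voxels are candidate tree regions.
print(voxels[counts > 20])
```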

3D Point Cloud Feature Explanations Using Gradient-Based Methods

Jun 09, 2020
Ananya Gupta, Simon Watson, Hujun Yin

Explainability is an important factor in driving user trust in neural networks for tasks with material impact. However, most work in this area focuses on image analysis and does not account for 3D data. We extend saliency methods that have been shown to work on image data to 3D data. We analyse features in point clouds and voxel spaces and show that edges and corners in 3D data are deemed important features while planar surfaces are deemed less important. The approach is model-agnostic and can provide useful information about learnt features. Driven by the insight that 3D data is inherently sparse, we visualise the features learnt by a voxel-based classification network and show that these features are also sparse and can be pruned relatively easily, leading to more efficient neural networks. Our results show that the Voxception-ResNet model can be pruned down to 5% of its parameters with negligible loss in accuracy.

* Accepted for IJCNN 2020 
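
Vanilla gradient saliency carries over to 3D almost unchanged: backpropagate the class score to the voxel grid and take per-voxel gradient magnitudes as importance. A minimal PyTorch sketch with a stand-in classifier; the paper's Voxception-ResNet is not reproduced here:

```python
import torch
import torch.nn as nn

# Stand-in voxel classifier; the method only needs gradients w.r.t. input.
net = nn.Sequential(
    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(8, 10))

voxels = torch.rand(1, 1, 32, 32, 32, requires_grad=True)

score = net(voxels)[0].max()      # top class score
score.backward()

# Saliency = |d score / d voxel|; large values mark influential voxels
# (edges and corners, per the paper's observation).
saliency = voxels.grad.abs().squeeze()
print(saliency.shape)             # torch.Size([32, 32, 32])
```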

Multi-Temporal Aerial Image Registration Using Semantic Features

Sep 19, 2019
Ananya Gupta, Yao Peng, Simon Watson, Hujun Yin

A semantic feature extraction method for multi-temporal high-resolution aerial image registration is proposed in this paper. These features encode information about temporally invariant objects such as roads, helping to deal with issues such as changing foliage that classical handcrafted features are unable to address. The features are extracted from a semantic segmentation network and, in the experiments, show good robustness and accuracy in registering aerial images across years and seasons.

* Accepted to 20th International Conference on Intelligent Data Engineering and Automated Learning (IDEAL) 
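
Once semantic keypoints (e.g. road junctions visible in both segmentations) have been matched, registration reduces to fitting a transform between the matched coordinates. A minimal numpy sketch of a least-squares affine fit on made-up matched points; the segmentation and matching stages, which are the paper's actual contribution, are elided:

```python
import numpy as np

# Registration step: given keypoints matched via temporally invariant
# semantic features, fit the affine transform mapping image 1 onto image 2.
src = np.array([[10., 12.], [40., 80.], [95., 33.], [60., 60.]])        # year 1
dst = src @ np.array([[0.98, -0.05], [0.05, 0.98]]).T + [3.0, -2.0]     # year 2

# Least-squares affine fit: dst ~= [x, y, 1] @ A
X = np.hstack([src, np.ones((len(src), 1))])
A, *_ = np.linalg.lstsq(X, dst, rcond=None)

print(A.T)            # recovered 2x3 affine matrix
print(X @ A - dst)    # residuals ~ 0 for exact correspondences
```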

Multi-Temporal High Resolution Aerial Image Registration Using Semantic Features

Aug 30, 2019
Ananya Gupta, Yao Peng, Simon Watson, Hujun Yin

A new type of segmentation-based semantic feature (SegSF) for multi-temporal aerial image registration is proposed in this paper. These features encode information about temporally invariant objects such as roads, helping to deal with issues such as changing foliage that classical handcrafted features are unable to address. The features are extracted from a semantic segmentation network and show good accuracy in registering aerial images across years and seasons.

* Under submission to 20th International Conference on Intelligent Data Engineering and Automated Learning (IDEAL) 