Will Maddern

Real-time Kinematic Ground Truth for the Oxford RobotCar Dataset

Feb 24, 2020
Will Maddern, Geoffrey Pascoe, Matthew Gadd, Dan Barnes, Brian Yeomans, Paul Newman


We describe the release of reference data towards a challenging long-term localisation and mapping benchmark based on the large-scale Oxford RobotCar Dataset. The release includes 72 traversals of a route through Oxford, UK, gathered in all illumination, weather and traffic conditions, and is representative of the conditions an autonomous vehicle would be expected to operate reliably in. Using post-processed raw GPS, IMU, and static GNSS base station recordings, we have produced a globally-consistent centimetre-accurate ground truth for the entire year-long duration of the dataset. Coupled with a planned online benchmarking service, we hope to enable quantitative evaluation and comparison of different localisation and mapping approaches focusing on long-term autonomy for road vehicles in urban environments challenged by changing weather.

* Dataset website: https://robotcar-dataset.robots.ox.ac.uk/ 
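The centimetre-accurate ground truth is intended for direct quantitative comparison of trajectory estimates. As a minimal, hypothetical sketch (the column names below are assumptions; the dataset website and SDK define the actual file format and provide official parsing tools), loading a per-traversal pose file and measuring the driven distance might look like this:

```python
# Sketch only: assumes a per-traversal CSV of RTK poses with (hypothetical)
# columns "timestamp,northing,easting" in metres. Check the dataset SDK at
# https://robotcar-dataset.robots.ox.ac.uk/ for the actual schema and tools.
import csv
import math

def load_rtk_poses(path):
    """Load RTK ground-truth poses as a list of (timestamp, northing, easting)."""
    poses = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            poses.append((int(row["timestamp"]),
                          float(row["northing"]),
                          float(row["easting"])))
    return poses

def trajectory_length(poses):
    """Total planar distance travelled along the traversal, in metres."""
    return sum(math.hypot(n1 - n0, e1 - e0)
               for (_, n0, e0), (_, n1, e1) in zip(poses, poses[1:]))
```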

Distant Vehicle Detection Using Radar and Vision

Jan 30, 2019
Simon Chadwick, Will Maddern, Paul Newman


For autonomous vehicles to operate successfully, they need to be aware of other vehicles with sufficient time to make safe, stable plans. Given the possible closing speeds between two vehicles, this necessitates the ability to accurately detect distant vehicles. Many current image-based object detectors using convolutional neural networks exhibit excellent performance on existing datasets such as KITTI. However, the performance of these networks falls when detecting small (distant) objects. We demonstrate that incorporating radar data can boost performance in these difficult situations. We also introduce an efficient automated method for training data generation using cameras of different focal lengths.
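The paper's contribution is the fusion architecture and the automated label-generation pipeline, neither of which is reproduced here. As a hedged illustration of the general idea of combining the two sensors, one common approach is to project radar returns into the image plane and feed them to the detector as an extra input channel; the intrinsics and extrinsics below are placeholders:

```python
# Sketch only: rasterise radar returns into an image-aligned range channel.
# This is an illustration, not the exact architecture in the paper; camera
# intrinsics K and the radar-to-camera extrinsic T_cam_radar are placeholders.
import numpy as np

def radar_channel(points_xyz, K, T_cam_radar, image_shape):
    """Rasterise radar returns (N x 3, metres, radar frame) into an HxW range image."""
    h, w = image_shape
    channel = np.zeros((h, w), dtype=np.float32)
    homog = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    cam = (T_cam_radar @ homog.T).T[:, :3]      # radar frame -> camera frame
    cam = cam[cam[:, 2] > 0.1]                  # keep points in front of camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                 # perspective division
    for (u, v), depth in zip(uv, cam[:, 2]):
        ui, vi = int(round(u)), int(round(v))
        if 0 <= ui < w and 0 <= vi < h:
            channel[vi, ui] = depth             # store range at the pixel
    return channel
```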


Benchmarking 6DOF Outdoor Visual Localization in Changing Conditions

Apr 04, 2018
Torsten Sattler, Will Maddern, Carl Toft, Akihiko Torii, Lars Hammarstrand, Erik Stenborg, Daniel Safari, Masatoshi Okutomi, Marc Pollefeys, Josef Sivic, Fredrik Kahl, Tomas Pajdla


Visual localization enables autonomous vehicles to navigate in their surroundings and augmented reality applications to link virtual to real worlds. Practical visual localization approaches need to be robust to a wide variety of viewing conditions, including day-night changes, as well as weather and seasonal variations, while providing highly accurate 6 degree-of-freedom (6DOF) camera pose estimates. In this paper, we introduce the first benchmark datasets specifically designed for analyzing the impact of such factors on visual localization. Using carefully created ground truth poses for query images taken under a wide variety of conditions, we evaluate the impact of various factors on 6DOF camera pose estimation accuracy through extensive experiments with state-of-the-art localization approaches. Based on our results, we draw conclusions about the difficulty of different conditions, showing that long-term localization is far from solved, and propose promising avenues for future work, including sequence-based localization approaches and the need for better local features. Our benchmark is available at visuallocalization.net.

* Accepted to CVPR 2018 as a spotlight 
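The benchmark scores submissions by the accuracy of their estimated 6DOF camera poses. Below is a minimal sketch of the standard error measures (translation error in metres, rotation error in degrees); the thresholds shown are illustrative defaults, not necessarily the benchmark's official ones:

```python
# Sketch only: score an estimated pose (R_est, t_est) against ground truth.
import numpy as np

def pose_error(R_est, t_est, R_gt, t_gt):
    """Return (translation error [m], rotation error [deg]) between two poses."""
    t_err = np.linalg.norm(t_est - t_gt)
    cos_angle = np.clip((np.trace(R_gt.T @ R_est) - 1.0) / 2.0, -1.0, 1.0)
    r_err = np.degrees(np.arccos(cos_angle))
    return t_err, r_err

def within_threshold(t_err, r_err, max_t=0.25, max_r=2.0):
    """True if the pose is within max_t metres AND max_r degrees (illustrative thresholds)."""
    return t_err <= max_t and r_err <= max_r
```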

Adversarial Training for Adverse Conditions: Robust Metric Localisation using Appearance Transfer

Mar 09, 2018
Horia Porav, Will Maddern, Paul Newman


We present a method of improving visual place recognition and metric localisation under very strong appearance change. We learn an invertible generator that can transform the conditions of images, e.g. from day to night or summer to winter. This image-transforming filter is explicitly designed to aid and abet feature matching using a new loss based on SURF detector and dense descriptor maps. A network is trained to output synthetic images optimised for feature matching given only an input RGB image, and these generated images are used to localise the robot against a previously built map using traditional sparse matching approaches. We benchmark our results using multiple traversals of the Oxford RobotCar Dataset over a year-long period, using one traversal as a map and the other to localise. We show that this method significantly improves place recognition and localisation under changing and adverse conditions, while reducing the number of mapping runs needed to successfully achieve reliable localisation.

* Accepted at ICRA 2018 
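The SURF-based loss itself is defined in the paper; the sketch below only illustrates the general shape of such a feature-matching objective, assuming detector response maps and dense descriptor maps have already been computed for the generated and target-condition images (how those maps are produced is not shown here):

```python
# Sketch only: an L1-style penalty between detector response maps (HxW) and
# dense descriptor maps (HxWxD) of the generated and target images. The
# weights and exact formulation are illustrative, not the paper's definition.
import numpy as np

def feature_matching_loss(resp_gen, resp_tgt, desc_gen, desc_tgt,
                          w_resp=1.0, w_desc=1.0):
    """Penalise disagreement in detector responses and dense descriptors."""
    resp_term = np.mean(np.abs(resp_gen - resp_tgt))
    desc_term = np.mean(np.abs(desc_gen - desc_tgt))
    return w_resp * resp_term + w_desc * desc_term
```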

Driven to Distraction: Self-Supervised Distractor Learning for Robust Monocular Visual Odometry in Urban Environments

Mar 05, 2018
Dan Barnes, Will Maddern, Geoffrey Pascoe, Ingmar Posner


We present a self-supervised approach to ignoring "distractors" in camera images for the purposes of robustly estimating vehicle motion in cluttered urban environments. We leverage offline multi-session mapping approaches to automatically generate a per-pixel ephemerality mask and depth map for each input image, which we use to train a deep convolutional network. At run-time we use the predicted ephemerality and depth as inputs to a monocular visual odometry (VO) pipeline, using either sparse features or dense photometric matching. Our approach yields metric-scale VO using only a single camera and can recover the correct egomotion even when 90% of the image is obscured by dynamic, independently moving objects. We evaluate our robust VO methods on more than 400 km of driving from the Oxford RobotCar Dataset and demonstrate reduced odometry drift and significantly improved egomotion estimation in the presence of large moving vehicles in urban traffic.

* International Conference on Robotics and Automation (ICRA), 2018. Video summary: http://youtu.be/ebIrBn_nc-k 
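How the predicted ephemerality is consumed depends on whether the sparse or dense VO front end is used. As a hedged sketch of the sparse case (a simple hard threshold, not necessarily the paper's exact weighting scheme), likely-dynamic keypoints can be discarded before egomotion estimation:

```python
# Sketch only: drop feature matches that land on likely-dynamic pixels.
import numpy as np

def filter_matches_by_ephemerality(keypoints_uv, ephemerality, threshold=0.5):
    """Keep only keypoints (N x 2 pixel coords) on pixels likely to be static.

    ephemerality: HxW map in [0, 1], where 1 means "likely dynamic / ephemeral".
    """
    h, w = ephemerality.shape
    uv = np.round(keypoints_uv).astype(int)
    in_bounds = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    keep = np.zeros(len(uv), dtype=bool)
    keep[in_bounds] = ephemerality[uv[in_bounds, 1], uv[in_bounds, 0]] < threshold
    return keypoints_uv[keep]
```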

Find Your Own Way: Weakly-Supervised Segmentation of Path Proposals for Urban Autonomy

Nov 17, 2017
Dan Barnes, Will Maddern, Ingmar Posner


We present a weakly-supervised approach to segmenting proposed drivable paths in images with the goal of autonomous driving in complex urban environments. Using recorded routes from a data collection vehicle, our proposed method generates vast quantities of labelled images containing proposed paths and obstacles without requiring manual annotation, which we then use to train a deep semantic segmentation network. With the trained network we can segment proposed paths and obstacles at run-time using a vehicle equipped with only a monocular camera without relying on explicit modelling of road or lane markings. We evaluate our method on the large-scale KITTI and Oxford RobotCar datasets and demonstrate reliable path proposal and obstacle segmentation in a wide variety of environments under a range of lighting, weather and traffic conditions. We illustrate how the method can generalise to multiple path proposals at intersections and outline plans to incorporate the system into a framework for autonomous urban driving.

* International Conference on Robotics and Automation (ICRA), 2017. Video summary: http://youtu.be/rbZ8ck_1nZk 
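The key idea is that the vehicle's own recorded trajectory provides the path labels for free. Below is a rough sketch of that projection step, assuming future positions have already been transformed into the current camera frame and using placeholder intrinsics and vehicle half-width:

```python
# Sketch only: mark the projected left/right boundaries of the driven path in
# the current image as weak supervision. Intrinsics K, half_width, and the
# near-plane cutoff are placeholder values, not the paper's parameters.
import numpy as np

def project_path_label(future_xyz_cam, K, image_shape, half_width=0.8):
    """Return an HxW boolean mask marking projected path-boundary pixels."""
    h, w = image_shape
    mask = np.zeros((h, w), dtype=bool)
    for x, y, z in future_xyz_cam:
        if z <= 0.5:                      # behind or too close to the camera
            continue
        for offset in (-half_width, half_width):
            u, v, s = K @ np.array([x + offset, y, z])
            ui, vi = int(round(u / s)), int(round(v / s))
            if 0 <= ui < w and 0 <= vi < h:
                mask[vi, ui] = True
    return mask
```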