Abstract: Accurate localisation in planetary robotics enables the advanced autonomy required to support the increased scale and scope of future missions. The successes of the Ingenuity helicopter and multiple planetary orbiters lay the groundwork for future missions that use ground-aerial robotic teams. In this paper, we consider rovers that use machine learning to localise themselves in a local aerial map from limited field-of-view monocular ground-view RGB images. A key consideration for machine learning methods is that real space data with ground-truth position labels suitable for training is scarce. In this work, we propose a novel method of localising rovers in an aerial map using cross-view-localising dual-encoder deep neural networks. We leverage semantic segmentation with vision foundation models and high-volume synthetic data to bridge the domain gap to real images. We also contribute a new cross-view dataset of real-world rover trajectories with corresponding ground-truth localisation data, captured in a planetary analogue facility, as well as a high-volume dataset of analogous synthetic image pairs. Combining the cross-view networks with particle filters for state estimation yields accurate position estimates over both simple and complex trajectories from sequences of ground-view images.
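As a concrete illustration of the final step, the sketch below shows one predict/update/resample cycle of a particle filter whose weights come from cross-view descriptor similarity. The "encoders" here are fixed random projections standing in for the paper's trained dual-encoder branches; all names, shapes, and parameters (DESC_DIM, PATCH, temp, ...) are illustrative assumptions, not the paper's architecture or tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the trained ground-view and aerial-view encoders: fixed
# random projections producing unit-norm descriptors (assumption only).
DESC_DIM, GROUND_HW, PATCH = 128, 64, 32
W_ground = rng.standard_normal((DESC_DIM, 3 * GROUND_HW * GROUND_HW))
W_aerial = rng.standard_normal((DESC_DIM, 3 * PATCH * PATCH))

def embed(img, W):
    v = W @ img.ravel()
    return v / np.linalg.norm(v)

def particle_filter_step(particles, ground_img, aerial_map, odom,
                         motion_std=2.0, temp=10.0):
    """One predict/update/resample cycle over 2D map positions (x, y)."""
    h, w = aerial_map.shape[:2]
    # Predict: propagate particles with noisy odometry, clipped to the map.
    particles = particles + odom + rng.normal(0.0, motion_std, particles.shape)
    particles[:, 0] = np.clip(particles[:, 0], 0, w - PATCH)
    particles[:, 1] = np.clip(particles[:, 1], 0, h - PATCH)
    # Update: weight each particle by the similarity between the ground-view
    # descriptor and the descriptor of the aerial patch at the particle.
    g = embed(ground_img, W_ground)
    sims = np.empty(len(particles))
    for i, (x, y) in enumerate(particles.astype(int)):
        sims[i] = embed(aerial_map[y:y + PATCH, x:x + PATCH], W_aerial) @ g
    weights = np.exp(temp * sims)
    weights /= weights.sum()
    # The weighted mean is the position estimate; resample for the next step.
    estimate = weights @ particles
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], estimate

# Usage with synthetic stand-in data:
aerial_map = rng.random((256, 256, 3))
particles = rng.uniform(0, 224, size=(200, 2))
ground_img = rng.random((GROUND_HW, GROUND_HW, 3))
particles, est = particle_filter_step(particles, ground_img, aerial_map,
                                      odom=np.array([1.0, 0.5]))
```

In the actual method, the two random projections would be replaced by the trained ground-view and aerial-view branches of the dual-encoder network.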
Abstract: Different models of dark matter can alter the distribution of mass in galaxy clusters in a variety of ways. However, so can uncertain astrophysical feedback mechanisms. Here we present a machine learning method that "learns" how the impact of dark matter self-interactions differs from that of astrophysical feedback in order to break this degeneracy and make inferences on dark matter. We train a Convolutional Neural Network on images of galaxy clusters from hydrodynamic simulations. In the idealised case, our algorithm is 80% accurate at identifying whether a galaxy cluster harbours collisionless dark matter, dark matter with $\sigma_{\rm DM}/m = 0.1$ cm$^2$/g, or dark matter with $\sigma_{\rm DM}/m = 1$ cm$^2$/g. Whilst we find that adding X-ray emissivity maps does not improve performance in differentiating models of collisional dark matter, it does improve the ability to disentangle different models of astrophysical feedback. We include noise to resemble data expected from Euclid and Chandra and find that our model has a statistical error of $< 0.01$ cm$^2$/g and is insensitive to shape measurement bias and photometric redshift errors. This method represents a new way to analyse data from upcoming telescopes that is an order of magnitude more precise and many orders of magnitude faster, enabling us to explore the dark matter parameter space like never before.
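A minimal sketch of the classification setup described above, assuming 2-channel inputs (e.g. a lensing mass map stacked with an X-ray emissivity map) and three output classes for $\sigma_{\rm DM}/m \in \{0, 0.1, 1\}$ cm$^2$/g. The architecture is a generic small CNN for illustration, not the paper's network.

```python
import torch
import torch.nn as nn

# Illustrative CNN over cluster maps; channel count, layer widths, and
# input resolution are assumptions, not the paper's configuration.
class ClusterCNN(nn.Module):
    def __init__(self, in_channels=2, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling to a 64-d feature
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Usage: a batch of 8 simulated 128x128 two-channel maps, with class
# labels 0/1/2 for collisionless, 0.1 cm^2/g, and 1 cm^2/g dark matter.
model = ClusterCNN()
logits = model(torch.randn(8, 2, 128, 128))
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 3, (8,)))
```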
Abstract: The ability of neural radiance fields (NeRFs) to conduct accurate 3D modelling has motivated the application of the technique to scene representation. Previous approaches have mainly followed a centralised learning paradigm, which assumes that all training images are available on one compute node. In this paper, we consider training NeRFs in a federated manner, whereby multiple compute nodes, each having acquired a distinct set of observations of the overall scene, learn a common NeRF in parallel. This supports the scenario of cooperatively modelling a scene using multiple agents. Our contribution is the first federated learning algorithm for NeRF, which splits the training effort across multiple compute nodes and obviates the need to pool the images at a central node. A technique based on low-rank decomposition of NeRF layers is introduced to reduce the bandwidth consumed in transmitting model parameters for aggregation. Transferring compressed models instead of the raw data also contributes to the privacy of the data-collecting agents.
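The bandwidth-reduction idea can be sketched with a truncated SVD over a single layer's weight matrix: each node transmits two low-rank factors instead of the full matrix, and the server reconstructs and averages them. This is plain FedAvg over decompressed weights under assumed function names, rank, and layer size; the paper's actual decomposition and aggregation scheme may differ.

```python
import numpy as np

def compress(W, rank):
    """Truncated-SVD factorisation of a layer weight matrix, so a node
    transmits factors A (m x r) and B (r x n) instead of W (m x n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank]

def decompress(A, B):
    """Reconstruct the (approximate) full weight matrix on the server."""
    return A @ B

def federated_average(factors):
    """Server-side aggregation: decompress each node's layer and average
    (plain FedAvg over reconstructed weights; illustrative only)."""
    return np.mean([decompress(A, B) for A, B in factors], axis=0)

# Example: a 256x256 MLP layer sent at rank 16 costs 2*256*16 floats
# instead of 256*256, roughly an 8x bandwidth reduction per layer.
W_nodes = [np.random.randn(256, 256) for _ in range(3)]
agg = federated_average([compress(W, rank=16) for W in W_nodes])
```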