The agility and versatility offered by UAV platforms are still not fully exploited in industrial applications due to their indoor usage limitations. A significant challenge in this sense is finding a reliable and cost-effective way to localize aerial vehicles in a GNSS-denied environment. In this paper, we focus on the visual-based positioning paradigm: high accuracy in UAV position and orientation estimation is achieved by leveraging the potential offered by a dense and size-heterogeneous map of tags. In detail, we propose an efficient visual odometry procedure focusing on hierarchical tag selection, outlier removal, and multi-tag estimation fusion, to facilitate the visual-inertial reconciliation. Experimental results show the validity of the proposed localization architecture as compared to the state of the art.
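The multi-tag estimation fusion step can be illustrated, under the common assumption of Gaussian per-tag errors, by inverse-covariance (information-form) weighting of the position estimates obtained from each detected tag. The sketch below is illustrative, not the paper's implementation; all names are hypothetical:

```python
import numpy as np

def fuse_tag_estimates(positions, covariances):
    """Fuse per-tag UAV position estimates by inverse-covariance weighting.

    positions: list of (3,) arrays, one position estimate per detected tag
    covariances: list of (3, 3) covariance matrices (Gaussian error assumption)
    """
    infos = [np.linalg.inv(C) for C in covariances]        # information matrices
    total_info = sum(infos)                                # fused information
    weighted_sum = sum(I @ p for I, p in zip(infos, positions))
    fused = np.linalg.solve(total_info, weighted_sum)      # fused estimate
    fused_cov = np.linalg.inv(total_info)                  # fused uncertainty
    return fused, fused_cov
```

A large, nearby tag (small covariance) dominates the fused estimate, which is one reason hierarchical tag selection pays off.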
Visual sensor networks constitute a fundamental class of distributed sensing systems, posing unique research questions in terms of complexity and performance. One of these novel challenges is the identification of the network stimulation model (SM), which emerges when a set of detectable events triggers different subsets of the cameras. In this direction, the formulation of the related SM identification problem is proposed, along with a suitable method for generating network observations. An approach based on deep embedded features and soft clustering is then leveraged to solve the presented identification problem. In detail, Gaussian Mixture Modeling is employed to provide a suitable description of the data distribution, and an autoencoder is used to reduce the undesired effects of the so-called curse of dimensionality. It is then shown that an SM can be learnt by performing Maximum A-Posteriori estimation on the encoded features, which belong to a space of lower dimensionality. Lastly, numerical results are reported to validate the devised estimation algorithm.
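The soft-clustering and MAP-assignment step can be sketched as follows, assuming the features have already been encoded into a low-dimensional space (here 1-D for brevity) and the mixture parameters have already been fitted; this is a minimal illustration, not the paper's pipeline:

```python
import numpy as np

def gmm_responsibilities(x, weights, means, stds):
    """Soft clustering: posterior probability of each Gaussian component
    for each (already-encoded, 1-D) feature in x."""
    x = np.asarray(x, dtype=float)
    pdfs = np.array([w * np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
                     for w, m, s in zip(weights, means, stds)])
    return pdfs / pdfs.sum(axis=0)          # normalize over components

def map_assignment(x, weights, means, stds):
    """MAP estimation: pick the most probable component for each feature."""
    return np.argmax(gmm_responsibilities(x, weights, means, stds), axis=0)
```

Each component of the fitted mixture would then correspond to one event class of the stimulation model.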
Active Position Estimation (APE) is the task of localizing one or more targets using one or more sensing platforms. APE is a key task for search and rescue missions, wildlife monitoring, source term estimation, and collaborative mobile robotics. Success in APE depends on the level of cooperation of the sensing platforms, their number, their degrees of freedom, and the quality of the information gathered. APE control laws enable active sensing by satisfying either pure-explorative or pure-exploitative criteria: the former minimize the uncertainty of the position estimate, whereas the latter drive the platform closer to task completion. In this paper, we define the main elements of APE to systematically classify and critically discuss the state of the art in this domain. We also propose a reference framework as a formalism to classify APE-related solutions. Overall, this survey explores the principal challenges and envisages the main research directions in the field of autonomous perception systems for localization tasks, and aims to promote the development of robust active sensing methods for search and tracking applications.
Research on connected vehicles represents a continuously evolving technological domain, fostered by the emerging Internet of Things (IoT) paradigm and by recent advances in intelligent transportation systems. Nowadays, vehicles are platforms capable of generating and receiving large amounts of data, and of acting automatically upon them. In the context of assisted driving, connected vehicle technology provides real-time information about the surrounding traffic conditions. Such information is expected to improve drivers' quality of life, for example by enabling decision-making strategies based on the current parking availability. In this context, we propose an online and adaptive scheme for parking availability mapping. Specifically, we adopt an information-seeking active sensing approach to select the incoming data, thus preserving onboard storage and processing resources; we then estimate parking availability through Gaussian Process Regression. We compare the proposed algorithm with several baselines, which attain inferior performance in terms of mapping convergence speed and adaptivity; moreover, the proposed approach comes at a very small computational cost.
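The Gaussian Process Regression step can be sketched in plain NumPy with a squared-exponential kernel over 1-D cell coordinates; hyperparameters and names are illustrative, not those of the paper:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential kernel between 1-D location arrays."""
    diff = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (diff / length_scale) ** 2)

def gp_predict(X_obs, y_obs, X_query, noise=1e-2):
    """Posterior mean and variance of parking availability at query locations."""
    K = rbf_kernel(X_obs, X_obs) + noise * np.eye(len(X_obs))
    K_s = rbf_kernel(X_query, X_obs)
    alpha = np.linalg.solve(K, y_obs)
    mean = K_s @ alpha
    v = np.linalg.solve(K, K_s.T)
    var = rbf_kernel(X_query, X_query).diagonal() - np.einsum('ij,ji->i', K_s, v)
    return mean, var
```

An information-seeking selection rule could, for instance, retain incoming observations taken where the posterior variance is largest, which is one way to preserve onboard resources while the map converges.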
Multi-modal Probabilistic Active Sensing (MMPAS) uses sensor fusion and probabilistic models to control the perception process of robotic sensing platforms. MMPAS is successfully employed in environmental exploration, collaborative mobile robotics, and target tracking, fostered by the high performance guarantees it offers on autonomous perception. In this context, we propose a bi-Radio-Visual PAS scheme to solve the transmitter discovery problem. Specifically, we first exploit the correlation between radio and visual measurements to learn a target detection model in a self-supervised manner. Then, the model is combined with antenna radiation anisotropies within a Bayesian Optimization framework that controls the platform. We show that the proposed algorithm attains an accuracy of 92%, outperforming two other probabilistic active sensing baselines.
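A Bayesian Optimization control loop typically selects the next platform position via an acquisition function. A minimal sketch using Upper Confidence Bound (UCB), assuming a posterior mean and standard deviation of the detection score over candidate waypoints (the function and parameter names are illustrative, not the paper's):

```python
import numpy as np

def ucb_next_waypoint(candidates, post_mean, post_std, kappa=2.0):
    """Pick the candidate maximizing mean + kappa * std: kappa trades off
    exploiting likely transmitter locations against exploring uncertain ones."""
    acquisition = np.asarray(post_mean) + kappa * np.asarray(post_std)
    return candidates[int(np.argmax(acquisition))]
```

In the paper's setting, the posterior would come from the self-supervised detection model combined with the antenna radiation pattern.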
Research on wireless sensors represents a continuously evolving technological domain thanks to their high potential: flexibility and scalability, fast and economical deployment, and pervasiveness in industrial, civil, and domestic contexts. However, maintenance costs and sensor reliability are strongly affected by battery lifetime, which may limit their use and exploitation. In this paper, we consider a scenario in which a wireless smart camera, equipped with a low-energy radio receiver, is used to visually detect a moving radio-emitting target. To preserve the camera lifetime, we design a probabilistic energy-aware controller that regulates the camera state. The radio signal strength at the receiver side is used to predict the target detectability via self-supervised Gaussian Process Regression combined with Recursive Bayesian Estimation. Both numerical and experimental results validate the proposed approach in terms of target detection accuracy and energy consumption.
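The recursive Bayesian part of such a controller can be sketched as a single-hypothesis Bayes update followed by a threshold policy; in the paper, the likelihoods would come from the self-supervised GPR model of the radio signal strength, while here they are assumed given and the threshold value is illustrative:

```python
def detectability_update(prior, lik_detectable, lik_not_detectable):
    """One recursive Bayesian update of the probability that the target is
    currently detectable, given the likelihood of the received radio signal
    strength under each of the two hypotheses."""
    num = lik_detectable * prior
    return num / (num + lik_not_detectable * (1.0 - prior))

def camera_state(posterior, wake_threshold=0.7):
    """Energy-aware policy: power the camera only when detection is likely."""
    return "ON" if posterior >= wake_threshold else "SLEEP"
```

Keeping the camera asleep while the posterior detectability is low is what saves battery lifetime, at the cost of possibly delayed detections.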
While automated driving technology has achieved tremendous progress, the scalable and rigorous testing and verification of safe automated and autonomous driving vehicles remain challenging. This paper proposes a learning-based falsification framework for testing the implementation of an automated or self-driving function in simulation. We assume that the function specification is associated with a violation metric on possible scenarios. Prior knowledge is incorporated both to limit the variance of the scenario parameters and within a model-based falsifier to guide and improve the learning process. For an exemplary adaptive cruise controller, the presented framework yields non-trivial falsifying scenarios with higher reward than scenarios obtained by purely learning-based or purely model-based falsification approaches.
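The sampling side of a falsifier can be sketched as a search over scenario parameters that keeps the worst-case scenario found; this toy version uses prior knowledge only to center and bound the sampling distribution, and omits the learning-based and model-based guidance that the framework adds on top (all names are hypothetical):

```python
import random

def falsify(simulate, violation, bounds, prior_mean, prior_std,
            n_trials=200, seed=0):
    """Sample scenario parameters around prior knowledge (limiting their
    variance) and return the scenario with the highest violation score."""
    rng = random.Random(seed)
    best_score, best_params = float("-inf"), None
    for _ in range(n_trials):
        # Gaussian sampling around the prior, clipped to the valid ranges
        params = [min(hi, max(lo, rng.gauss(m, s)))
                  for (lo, hi), m, s in zip(bounds, prior_mean, prior_std)]
        score = violation(simulate(params))
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score
```

A falsifying scenario is one whose violation score exceeds the specification's acceptance threshold; the learned components would replace the blind Gaussian sampling with a guided proposal.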
Marker-based motion capture (MoCap) systems can comprise several dozen cameras tasked with reconstructing the trajectories of hundreds of targets. With a large number of cameras, it becomes interesting to determine the optimal reconstruction strategy. To this end, it is of fundamental importance to understand the information provided by different camera measurements and how it is combined, i.e., how the reconstruction error changes when different cameras are considered. In this work, an approximation of the reconstruction error variance is first derived. Simulation results suggest that the proposed strategy yields a good approximation of the real error variance with a significant reduction of the computational time.
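A standard first-order way to approximate how the reconstruction error covariance changes with the chosen camera subset (not necessarily the paper's exact derivation) is to sum the information contributed by each camera's measurement Jacobian and invert it:

```python
import numpy as np

def reconstruction_covariance(jacobians, meas_variances):
    """First-order approximation of the reconstruction error covariance:
    Cov ~ inv(sum_i J_i^T J_i / sigma_i^2), where J_i maps the target
    position to camera i's image measurement and sigma_i^2 is its
    measurement noise variance."""
    info = sum((J.T @ J) / v for J, v in zip(jacobians, meas_variances))
    return np.linalg.inv(info)
```

Comparing, e.g., the trace of this covariance across candidate camera subsets ranks them without running the full reconstruction, which is where the computational savings come from.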