Eduardo Montijano

Instituto de Investigación en Ingeniería de Aragón, Universidad de Zaragoza, Spain

Perceptual Factors for Environmental Modeling in Robotic Active Perception

Sep 19, 2023
David Morilla-Cabello, Jonas Westheider, Marija Popovic, Eduardo Montijano

Accurately assessing the potential value of new sensor observations is a critical aspect of planning for active perception. This task is particularly challenging when reasoning about high-level scene understanding using measurements from vision-based neural networks. Due to appearance-based reasoning, the measurements are susceptible to several environmental effects such as the presence of occluders, variations in lighting conditions, and redundancy of information due to similarity in appearance between nearby viewpoints. To address this, we propose a new active perception framework incorporating an arbitrary number of perceptual effects in planning and fusion. Our method models the correlation with the environment through a set of general functions, termed perceptual factors, to construct a perceptual map, which quantifies the aggregated influence of the environment on candidate viewpoints. This information is seamlessly incorporated into the planning and fusion processes by adjusting the uncertainty associated with measurements to weigh their contributions. We evaluate our perceptual maps in a simulated environment that reproduces environmental conditions common in robotics applications. Our results show that, by accounting for environmental effects within our perceptual maps, we improve state estimation by correctly selecting viewpoints and appropriately weighting the measurement noise when it is affected by environmental factors. We furthermore deploy our approach on a ground robot to showcase its applicability to real-world active perception missions.

* 7 pages, 9 figures, under review for IEEE ICRA 2023 
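
As a rough illustration of the weighting idea described above (not the paper's exact formulation), the sketch below aggregates a few hypothetical perceptual factors (visibility, lighting, viewpoint novelty) into a per-viewpoint variance inflation and then fuses the measurements with precision weights; the factor values, the multiplicative aggregation, and the function names are all assumptions.

```python
import numpy as np

def aggregate_factors(factors):
    """Combine per-viewpoint perceptual factors in [0, 1] (1 = ideal conditions)
    into a single variance-inflation term. The multiplicative aggregation is an
    assumption for illustration, not the paper's definition."""
    quality = np.prod(factors, axis=0)           # overall viewpoint quality
    return 1.0 / np.clip(quality, 1e-3, 1.0)     # inflate noise when quality drops

# Hypothetical factors for 4 candidate viewpoints.
factors = np.array([
    [1.0, 0.4, 0.9, 0.8],   # visibility (1 = unoccluded)
    [0.9, 0.9, 0.3, 0.8],   # lighting quality
    [1.0, 1.0, 1.0, 0.5],   # novelty w.r.t. previously used viewpoints
])
base_var = 0.05
inflated_var = base_var * aggregate_factors(factors)

# Precision-weighted fusion of noisy scalar measurements from each viewpoint.
z = np.array([0.98, 0.60, 0.75, 0.91])           # e.g., class score of a map cell
w = 1.0 / inflated_var
estimate = np.sum(w * z) / np.sum(w)
print(inflated_var, estimate)
```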

Learning to Identify Graphs from Node Trajectories in Multi-Robot Networks

Jul 10, 2023
Eduardo Sebastian, Thai Duong, Nikolay Atanasov, Eduardo Montijano, Carlos Sagues

The graph identification problem consists of discovering the interactions among nodes in a network given their state/feature trajectories. This problem is challenging because the behavior of a node is coupled to all the other nodes by the unknown interaction model. Besides, high-dimensional and nonlinear state trajectories make it difficult to identify whether two nodes are connected. Current solutions rely on prior knowledge of the graph topology and the dynamic behavior of the nodes, and hence generalize poorly to other network configurations. To address these issues, we propose a novel learning-based approach that combines (i) a strongly convex program that efficiently uncovers graph topologies with global convergence guarantees and (ii) a self-attention encoder that learns to embed the original state trajectories into a feature space and predicts appropriate regularizers for the optimization program. In contrast to other works, our approach can identify the graph topology of unseen networks with new configurations in terms of the number of nodes, connectivity, or state trajectories. We demonstrate the effectiveness of our approach in identifying graphs in multi-robot formation and flocking tasks.

* Under review at IEEE MRS 2023 
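
The strongly convex program and the self-attention encoder from the abstract are not reproduced here; as a hedged toy instance of graph identification from node trajectories, the sketch below simulates linear consensus dynamics on a small hand-picked weighted graph and recovers the edges by regressing each node's velocity on the relative states of the other nodes. The dynamics, graph, weights, and threshold are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T, dt = 6, 500, 0.01

# Hypothetical ground-truth weighted graph with linear consensus dynamics
# x_i' = sum_j A_ij (x_j - x_i); the paper targets the harder nonlinear,
# high-dimensional case, so this is only a toy instance of the problem.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 2), (1, 4)]
A_true = np.zeros((n, n))
for i, j in edges:
    A_true[i, j] = A_true[j, i] = rng.uniform(0.5, 1.5)
L = np.diag(A_true.sum(1)) - A_true

# Simulate node trajectories with a small amount of process noise.
X = np.zeros((T, n)); X[0] = rng.normal(size=n)
for t in range(T - 1):
    X[t + 1] = X[t] - dt * (L @ X[t]) + 1e-4 * rng.normal(size=n)

# Identify the graph: regress each node's velocity on the relative states of the
# other nodes and threshold the recovered interaction weights.
V = (X[1:] - X[:-1]) / dt
A_hat = np.zeros((n, n))
for i in range(n):
    others = [j for j in range(n) if j != i]
    D = X[:-1, others] - X[:-1, [i]]
    w, *_ = np.linalg.lstsq(D, V[:, i], rcond=None)
    A_hat[i, others] = w

print("true edges:\n", (A_true > 0).astype(int))
print("recovered edges:\n", (np.abs(A_hat) > 0.25).astype(int))
```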

A Framework for Fast Prototyping of Photo-realistic Environments with Multiple Pedestrians

Apr 14, 2023
Sara Casao, Andrés Otero, Álvaro Serra-Gómez, Ana C. Murillo, Javier Alonso-Mora, Eduardo Montijano

Robotic applications involving people often require advanced perception systems to better understand complex real-world scenarios. To address this challenge, photo-realistic and physics simulators are gaining popularity as a means of generating accurate data labeling and designing scenarios for evaluating generalization capabilities, e.g., lighting changes, camera movements or different weather conditions. We develop a photo-realistic framework built on Unreal Engine and AirSim to easily generate scenarios with pedestrians and mobile robots. The framework is capable of generating random and customized trajectories for each person and provides up to 50 ready-to-use people models along with an API for retrieving their metadata. We demonstrate the usefulness of the proposed framework with a use case of multi-target tracking, a popular problem in real pedestrian scenarios. We present and evaluate the notable variability of features in the perception data obtained.
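
The framework's own API and people models are not described in this abstract, so the snippet below only shows the generic AirSim Python client retrieving a paired RGB and ground-truth segmentation frame, the kind of automatically labeled data such a framework builds on. It assumes a running Unreal/AirSim instance; the camera name "front_center" and the multirotor vehicle type depend on the settings.json used and are assumptions.

```python
import numpy as np
import airsim  # standard AirSim Python client; requires a running AirSim/Unreal instance

# Connect to the simulator (camera name and vehicle type are assumptions that
# depend on the AirSim settings.json in use).
client = airsim.MultirotorClient()
client.confirmConnection()

# Request a paired RGB frame and ground-truth segmentation frame.
responses = client.simGetImages([
    airsim.ImageRequest("front_center", airsim.ImageType.Scene, False, False),
    airsim.ImageRequest("front_center", airsim.ImageType.Segmentation, False, False),
])

rgb = np.frombuffer(responses[0].image_data_uint8, dtype=np.uint8)
rgb = rgb.reshape(responses[0].height, responses[0].width, 3)
seg = np.frombuffer(responses[1].image_data_uint8, dtype=np.uint8)
seg = seg.reshape(responses[1].height, responses[1].width, 3)
print(rgb.shape, seg.shape)
```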

Robust Fusion for Bayesian Semantic Mapping

Mar 14, 2023
David Morilla-Cabello, Lorenzo Mur-Labadia, Ruben Martinez-Cantin, Eduardo Montijano

The integration of semantic information in a map allows robots to better understand their environment and make high-level decisions. In the last few years, neural networks have shown enormous progress in their perception capabilities. However, when fusing multiple observations from a neural network in a semantic map, its inherent overconfidence with unknown data gives too much weight to outliers and decreases the robustness of the resulting map. In this work, we propose a novel robust fusion method to combine multiple Bayesian semantic predictions. Our method uses the uncertainty estimation provided by a Bayesian neural network to calibrate the way in which the measurements are fused. This is done by regularizing the observations to mitigate the problem of overconfident outlier predictions and using the epistemic uncertainty to weigh their influence in the fusion, resulting in a different formulation of the probability distributions. We validate our robust fusion strategy by performing experiments on photo-realistic simulated environments and real scenes. In both cases, we use a network trained on different data to expose the model to varying data distributions. The results show that considering the model's uncertainty and regularizing the probability distributions of the observations yields better semantic segmentation performance and more robustness to outliers, compared with other methods.

* 7 pages, 7 figures, under review at IEEE IROS 2023 
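
A minimal sketch of the general idea, assuming a uniform-mixing regularizer and an uncertainty-dependent exponent for each observation (neither is necessarily the paper's formulation): overconfident class distributions are softened, and views with high epistemic uncertainty contribute less to the fused posterior.

```python
import numpy as np

def robust_fuse(prior, observations, uncertainties, eps=0.05):
    """Fuse per-pixel class probabilities from several views into a posterior.

    observations: list of class-probability vectors from a Bayesian network.
    uncertainties: per-observation epistemic uncertainty in [0, 1] (1 = clueless).
    The uniform mixing and the exponent-style weighting below are illustrative
    assumptions, not the formulation in the paper.
    """
    k = len(prior)
    log_post = np.log(prior)
    for p, u in zip(observations, uncertainties):
        p_reg = (1 - eps) * p + eps / k          # soften overconfident outputs
        log_post += (1.0 - u) * np.log(p_reg)    # uncertain views contribute less
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

prior = np.full(3, 1 / 3)
obs = [np.array([0.97, 0.02, 0.01]),   # confident, correct view
       np.array([0.01, 0.98, 0.01]),   # confident outlier (e.g., out-of-distribution)
       np.array([0.80, 0.15, 0.05])]
unc = [0.1, 0.9, 0.2]                  # the outlier carries high epistemic uncertainty
print(robust_fuse(prior, obs, unc))
```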

Active Classification of Moving Targets with Learned Control Policies

Dec 07, 2022
Álvaro Serra-Gómez, Eduardo Montijano, Wendelin Böhmer, Javier Alonso-Mora

In this paper, we consider the problem where a drone has to collect semantic information to classify multiple moving targets. In particular, we address the challenge of computing control inputs that move the drone to informative viewpoints (position and orientation) when the information is extracted using a "black-box" classifier, e.g., a deep learning neural network. These algorithms typically lack analytical relationships between the viewpoints and their associated outputs, preventing their use in information-gathering schemes. To fill this gap, we propose a novel attention-based architecture, trained via Reinforcement Learning (RL), that outputs the next viewpoint for the drone, favoring the acquisition of evidence from as many unclassified targets as possible while reasoning about their movement, orientation, and occlusions. Then, we use a low-level MPC controller to move the drone to the desired viewpoint, taking its actual dynamics into account. We show that our approach not only outperforms a variety of baselines but also generalizes to scenarios unseen during training. Additionally, we show that the network scales to large numbers of targets and generalizes well to different movement dynamics of the targets.

* 8 pages, 6 figures, Submitted to IEEE RA-L 
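
The attention-based RL policy itself is not sketched here; instead, the snippet below shows a naive greedy baseline for the same objective, scoring candidate viewpoints by how many still-unclassified targets are visible (in range, inside the field of view, and not blocked by obstacles). All geometry, thresholds, and function names are assumptions meant only to make the viewpoint-selection objective concrete.

```python
import numpy as np

def visible(viewpoint, yaw, target, obstacles, fov=np.deg2rad(90), max_range=8.0):
    """Rough visibility test: target within range and field of view, and not
    blocked by any circular obstacle (simple segment-disc intersection)."""
    d = target - viewpoint
    dist = np.linalg.norm(d)
    if dist > max_range:
        return False
    ang = np.arctan2(d[1], d[0])
    if abs((ang - yaw + np.pi) % (2 * np.pi) - np.pi) > fov / 2:
        return False
    for c, r in obstacles:
        t = np.clip(np.dot(c - viewpoint, d) / dist**2, 0.0, 1.0)
        if np.linalg.norm(viewpoint + t * d - c) < r:
            return False
    return True

def best_viewpoint(candidates, targets, classified, obstacles):
    """Greedy next-best-view: maximize the number of unclassified targets seen."""
    scores = []
    for vp, yaw in candidates:
        score = sum(visible(vp, yaw, t, obstacles)
                    for t, done in zip(targets, classified) if not done)
        scores.append(score)
    return candidates[int(np.argmax(scores))], max(scores)

targets = [np.array([3.0, 1.0]), np.array([4.0, -2.0]), np.array([6.0, 3.0])]
classified = [False, False, True]
obstacles = [(np.array([2.0, -1.0]), 0.7)]   # (center, radius) discs
candidates = [(np.array([0.0, 0.0]), 0.0), (np.array([1.0, 3.0]), -0.5)]
print(best_viewpoint(candidates, targets, classified, obstacles))
```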

Multi-robot Implicit Control of Massive Herds

Sep 20, 2022
Eduardo Sebastian, Eduardo Montijano, Carlos Sagues

This paper solves the problem of herding countless evaders by means of a few robots. The objective is to steer all the evaders towards a desired tracking reference while avoiding escapes. The problem is very challenging due to the highly complex repulsive dynamics of the evaders and the underdetermined states to control. We propose a solution that is based on Implicit Control and a novel dynamic assignment strategy to select the evaders to be directly controlled. The former is a general technique that explicitly computes control inputs even for highly complex input-nonaffine dynamics. The latter is built upon a convex-hull dynamic clustering inspired by the Voronoi tessellation problem. The combination of both allows us to choose the best evaders to control directly, while the others are controlled indirectly by exploiting the repulsive interactions among them. Simulations show that massive herds can be herded through complex patterns by means of a few herders.

* E. Sebastian, E. Montijano and C. Sagues, "Multi-robot Implicit Control of Massive Herds", Fifth Iberian Robotics Conference (ROBOT 2022), 2022 
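
As a hedged illustration of the assignment idea (not the paper's Implicit Control law or its dynamic clustering), the sketch below picks the evaders to control directly as the convex-hull members lying farthest behind the herd with respect to the goal direction, so that pushing them drags the rest through their repulsive interactions. The selection rule and numbers are assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

def select_evaders(evaders, goal, n_herders):
    """Pick which evaders to control directly: take the convex-hull evaders that
    lie farthest behind the herd w.r.t. the goal direction (an illustrative rule,
    not the paper's dynamic assignment strategy)."""
    hull_idx = ConvexHull(evaders).vertices
    centroid = evaders.mean(axis=0)
    to_goal = (goal - centroid) / np.linalg.norm(goal - centroid)
    # Larger value = farther behind the herd relative to the goal direction.
    behindness = -(evaders[hull_idx] - centroid) @ to_goal
    order = hull_idx[np.argsort(behindness)[::-1]]
    return order[:n_herders]

rng = np.random.default_rng(1)
evaders = rng.normal(size=(40, 2))       # herd of 40 evaders
goal = np.array([10.0, 0.0])
print("directly controlled evaders:", select_evaders(evaders, goal, n_herders=3))
```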

LEMURS: Learning Distributed Multi-Robot Interactions

Sep 20, 2022
Eduardo Sebastian, Thai Duong, Nikolay Atanasov, Eduardo Montijano, Carlos Sagues

This paper presents LEMURS, an algorithm for learning scalable multi-robot control policies from cooperative task demonstrations. We propose a port-Hamiltonian description of the multi-robot system to exploit universal physical constraints in interconnected systems and achieve closed-loop stability. We represent a multi-robot control policy using an architecture that combines self-attention mechanisms and neural ordinary differential equations. The former handles time-varying communication in the robot team, while the latter respects the continuous-time robot dynamics. Our representation is distributed by construction, enabling the learned control policies to be deployed in robot teams of different sizes. We demonstrate that LEMURS can learn interactions and cooperative behaviors from demonstrations of multi-agent navigation and flocking tasks.

* In Submission 
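
LEMURS itself (self-attention plus neural ODEs) is not reproduced here; the sketch below only shows the port-Hamiltonian structure the abstract refers to, x_dot = (J - R) dH/dx + G u, on a one-degree-of-freedom example with a quadratic Hamiltonian. The skew-symmetric J and positive semi-definite R are what guarantee that the unforced energy cannot increase; all numerical values are assumptions.

```python
import numpy as np

# Minimal port-Hamiltonian system: x = (q, p), H(q, p) = p^2 / (2 m) + k q^2 / 2,
# x_dot = (J - R) dH/dx + G u.  Values and the quadratic H are illustrative.
m, k, damping, dt = 1.0, 1.0, 0.3, 0.01
J = np.array([[0.0, 1.0], [-1.0, 0.0]])     # skew-symmetric interconnection
R = np.diag([0.0, damping])                  # positive semi-definite dissipation
G = np.array([0.0, 1.0])                     # input acts on the momentum

def grad_H(x):
    q, p = x
    return np.array([k * q, p / m])

x = np.array([1.0, 0.0])
for _ in range(1000):
    u = 0.0                                  # unforced: the energy must not increase
    x = x + dt * ((J - R) @ grad_H(x) + G * u)
q, p = x
print("final energy:", 0.5 * k * q**2 + 0.5 * p**2 / m)  # decays from 0.5
```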

CineMPC: Controlling Camera Intrinsics and Extrinsics for Autonomous Cinematography

Apr 08, 2021
Pablo Pueyo, Eduardo Montijano, Ana C. Murillo, Mac Schwager

We present CineMPC, an algorithm to autonomously control a UAV-borne video camera in a nonlinear MPC loop. CineMPC controls both the position and orientation of the camera (the camera extrinsics) as well as the lens focal length, focal distance, and aperture (the camera intrinsics). While some existing solutions autonomously control the position and orientation of the camera, no existing solutions also control the intrinsic parameters, which are essential tools for rich cinematographic expression. The intrinsic parameters control the parts of the scene that are focused or blurred, and the viewers' perception of depth in the scene. Cinematographers commonly use the camera intrinsics to direct the viewers' attention through the use of focus, convey suspense through telephoto views, inspire awe through wide-angle views, and, in general, deliver an emotionally rich viewing experience. Our algorithm can use any existing approach to detect the subjects in the scene, and tracks those subjects throughout a user-specified desired camera trajectory that includes camera intrinsics. CineMPC closes the loop from camera images to UAV trajectory in order to follow the desired relative trajectory as the subjects move through the scene. The cinematographer can use CineMPC to autonomously record scenes using the full array of cinematographic tools for artistic expression.
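
One concrete example of how the intrinsics shape the image is the thin-lens circle-of-confusion formula, which an MPC cost term could penalize to keep a subject in focus. The formula below is the standard optics relation, not CineMPC's actual cost, and the lens parameters and "acceptable blur" threshold are assumptions.

```python
import numpy as np

def circle_of_confusion(f, N, focus_dist, subject_dist):
    """Thin-lens blur (circle-of-confusion diameter, same units as f) for a
    subject at subject_dist when a lens of focal length f and f-number N is
    focused at focus_dist. Standard optics formula, not CineMPC's cost itself."""
    aperture = f / N
    return aperture * np.abs(subject_dist - focus_dist) / subject_dist * f / (focus_dist - f)

# Example: 50 mm lens, f/2, focused at 4 m; how blurred is a subject at 6 m?
f, N = 0.050, 2.0
blur = circle_of_confusion(f, N, focus_dist=4.0, subject_dist=6.0)
print(f"CoC diameter: {blur * 1e3:.3f} mm")

# A toy MPC-style focus cost could penalize blur above the sensor's acceptable
# CoC (roughly 0.03 mm for a full-frame sensor), summed over the subjects in shot.
acceptable = 0.03e-3
cost = max(0.0, blur - acceptable) ** 2
print("focus cost term:", cost)
```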

Distributed Multi-Target Tracking in Camera Networks

Oct 26, 2020
Sara Casao, Ana C. Murillo, Eduardo Montijano

Most recent works on multi-target tracking with multiple cameras focus on centralized systems. In contrast, this paper presents a multi-target tracking approach implemented in a distributed camera network. The advantages of distributed systems lie in lighter communication management, greater robustness to failures, and local decision making. On the other hand, data association and information fusion are more challenging than in a centralized setup, mostly due to the lack of global and complete information. The proposed algorithm boosts the benefits of the Distributed-Consensus Kalman Filter with the support of a re-identification network and a distributed tracker manager module to facilitate consistent information. These techniques complement each other and facilitate cross-camera data association in a simple and effective manner. We evaluate the whole system with known public data sets under different conditions, demonstrating the advantages of combining all the modules. In addition, we compare our algorithm to existing centralized tracking methods, outperforming them in terms of accuracy and bandwidth usage.
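
As a hedged sketch of the distributed fusion ingredient (not the full tracker with re-identification and track management), the snippet below runs information-form consensus with Metropolis weights over a small chain of cameras, so that every camera converges to the centralized fused estimate of a single target position. The camera graph, covariances, and iteration count are assumptions.

```python
import numpy as np

# Each camera holds a noisy 2D position measurement of the same target and an
# estimate of its measurement covariance. Repeated Metropolis-weighted consensus
# averages the information form across the camera graph.
rng = np.random.default_rng(2)
true_pos = np.array([5.0, 3.0])
n_cams = 4
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # a chain of cameras

R = [np.diag(rng.uniform(0.1, 0.5, size=2)) for _ in range(n_cams)]
z = [true_pos + rng.multivariate_normal(np.zeros(2), R[i]) for i in range(n_cams)]

# Local information contributions.
Y = [np.linalg.inv(R[i]) for i in range(n_cams)]
y = [Y[i] @ z[i] for i in range(n_cams)]

def metropolis(i, j):
    return 1.0 / (1 + max(len(neighbors[i]), len(neighbors[j])))

# Several consensus iterations (synchronous updates).
for _ in range(50):
    Y_new, y_new = [], []
    for i in range(n_cams):
        wi = 1.0 - sum(metropolis(i, j) for j in neighbors[i])
        Y_new.append(wi * Y[i] + sum(metropolis(i, j) * Y[j] for j in neighbors[i]))
        y_new.append(wi * y[i] + sum(metropolis(i, j) * y[j] for j in neighbors[i]))
    Y, y = Y_new, y_new

# After consensus, every camera recovers (approximately) the centralized estimate.
print([np.linalg.solve(Y[i], y[i]) for i in range(n_cams)])
```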

GeoD: Consensus-based Geodesic Distributed Pose Graph Optimization

Oct 01, 2020
Eric Cristofalo, Eduardo Montijano, Mac Schwager

We present a consensus-based distributed pose graph optimization algorithm for obtaining an estimate of the 3D translation and rotation of each pose in a pose graph, given noisy relative measurements between poses. The algorithm, called GeoD, implements a continuous time distributed consensus protocol to minimize the geodesic pose graph error. GeoD is distributed over the pose graph itself, with a separate computation thread for each node in the graph, and messages are passed only between neighboring nodes in the graph. We leverage tools from Lyapunov theory and multi-agent consensus to prove the convergence of the algorithm. We identify two new consistency conditions sufficient for convergence: pairwise consistency of relative rotation measurements, and minimal consistency of relative translation measurements. GeoD incorporates a simple one step distributed initialization to satisfy both conditions. We demonstrate GeoD on simulated and real world SLAM datasets. We compare to a centralized pose graph optimizer with an optimality certificate (SE-Sync) and a Distributed Gauss-Seidel (DGS) method. On average, GeoD converges 20 times more quickly than DGS to a value with 3.4 times less error when compared to the global minimum provided by SE-Sync. GeoD scales more favorably with graph size than DGS, converging over 100 times faster on graphs larger than 1000 poses. Lastly, we test GeoD on a multi-UAV vision-based SLAM scenario, where the UAVs estimate their pose trajectories in a distributed manner using the relative poses extracted from their on board camera images. We show qualitative performance that is better than either the centralized SE-Sync or the distributed DGS methods.

* Preprint for IEEE T-RO submission 
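
A tiny illustration of the geodesic consensus ingredient, assuming a ring graph and synchronous updates: each node nudges its rotation estimate toward its neighbors along the SO(3) geodesic using log/exp maps. GeoD additionally anchors the estimates with noisy relative measurements and runs an analogous update for the translations, which this sketch omits.

```python
import numpy as np

def hat(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def exp_so3(w):
    th = np.linalg.norm(w)
    if th < 1e-9:
        return np.eye(3)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

def log_so3(R):
    th = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if th < 1e-9:
        return np.zeros(3)
    w_hat = (R - R.T) * (th / (2 * np.sin(th)))
    return np.array([w_hat[2, 1], w_hat[0, 2], w_hat[1, 0]])

# Geodesic rotation consensus on a ring of 5 nodes: R_i <- R_i exp(eps * sum_j log(R_i^T R_j)).
rng = np.random.default_rng(3)
n, step = 5, 0.2
neighbors = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
R = [exp_so3(0.3 * rng.normal(size=3)) for _ in range(n)]

for _ in range(200):
    R = [Ri @ exp_so3(step * sum(log_so3(Ri.T @ R[j]) for j in neighbors[i]))
         for i, Ri in enumerate(R)]

print("max disagreement (rad):",
      max(np.linalg.norm(log_so3(R[0].T @ Ri)) for Ri in R))
```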