Eric Cristofalo

GeoD: Consensus-based Geodesic Distributed Pose Graph Optimization

Oct 01, 2020
Eric Cristofalo, Eduardo Montijano, Mac Schwager

We present a consensus-based distributed pose graph optimization algorithm for obtaining an estimate of the 3D translation and rotation of each pose in a pose graph, given noisy relative measurements between poses. The algorithm, called GeoD, implements a continuous-time distributed consensus protocol to minimize the geodesic pose graph error. GeoD is distributed over the pose graph itself, with a separate computation thread for each node in the graph, and messages are passed only between neighboring nodes in the graph. We leverage tools from Lyapunov theory and multi-agent consensus to prove the convergence of the algorithm. We identify two new consistency conditions sufficient for convergence: pairwise consistency of relative rotation measurements, and minimal consistency of relative translation measurements. GeoD incorporates a simple one-step distributed initialization to satisfy both conditions. We demonstrate GeoD on simulated and real-world SLAM datasets. We compare to a centralized pose graph optimizer with an optimality certificate (SE-Sync) and a Distributed Gauss-Seidel (DGS) method. On average, GeoD converges 20 times more quickly than DGS to a value with 3.4 times less error when compared to the global minimum provided by SE-Sync. GeoD scales more favorably with graph size than DGS, converging over 100 times faster on graphs larger than 1000 poses. Lastly, we test GeoD on a multi-UAV vision-based SLAM scenario, where the UAVs estimate their pose trajectories in a distributed manner using the relative poses extracted from their on-board camera images. We show qualitative performance that is better than either the centralized SE-Sync or the distributed DGS method.
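GeoD itself is a continuous-time protocol whose convergence is proven with Lyapunov arguments; as a rough intuition only, the sketch below shows a discretized, consensus-style rotation update driven by the geodesic (rotation log-map) error between a node's estimate and each neighbor's estimate composed with the noisy relative measurement. The two-node graph, step size, and measurement convention (R_j ≈ R_i R_ij) are illustrative assumptions, not the paper's algorithm, and the translation update is omitted.

# Illustrative sketch only -- not the GeoD implementation.
import numpy as np
from scipy.spatial.transform import Rotation as R

def consensus_rotation_step(R_est, R_meas, neighbors, step=0.1):
    """One synchronous update: node i nudges its rotation toward agreement with
    each neighbor j's estimate composed with the relative measurement R_meas[(i, j)]."""
    R_new = {}
    for i, R_i in R_est.items():
        delta = np.zeros(3)
        for j in neighbors[i]:
            target = R_est[j] * R_meas[(i, j)].inv()      # prediction of R_i from neighbor j
            delta += (R_i.inv() * target).as_rotvec()     # geodesic (log-map) residual
        R_new[i] = R_i * R.from_rotvec(step * delta)      # exponential-map update
    return R_new

# Two-node toy graph with a noiseless relative measurement R_01 = R_0^{-1} R_1.
R_true = {0: R.identity(), 1: R.from_euler("z", 30, degrees=True)}
R_meas = {(0, 1): R_true[0].inv() * R_true[1], (1, 0): R_true[1].inv() * R_true[0]}
R_est = {0: R.identity(), 1: R.identity()}
neighbors = {0: [1], 1: [0]}
for _ in range(100):
    R_est = consensus_rotation_step(R_est, R_meas, neighbors)
# Up to a global gauge rotation, the estimated relative rotation approaches 30 degrees:
print((R_est[0].inv() * R_est[1]).magnitude() * 180 / np.pi)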

* Preprint for IEEE T-RO submission 

CinemAirSim: A Camera-Realistic Robotics Simulator for Cinematographic Purposes

Mar 17, 2020
Pablo Pueyo, Eric Cristofalo, Eduardo Montijano, Mac Schwager

Drones and Unmanned Aerial Vehicles (UAVs) are becoming increasingly popular in the film and entertainment industries, in part because of their maneuverability and the dynamic shots and perspectives they enable. While methods exist for controlling the position and orientation of drones for visibility, other artistic elements of the filming process, such as focal blur and light control, remain unexplored in the robotics community. The lack of cinematographic robotics solutions is partly due to the cost associated with the cameras and devices used in the filming industry, but also because state-of-the-art photo-realistic robotics simulators only utilize a fully in-focus pinhole camera model, which does not incorporate these desired artistic attributes. To overcome this, the main contribution of this work is to endow the well-known drone simulator, AirSim, with a cinematic camera, as well as to extend its API to control all of its parameters in real time, including various filming lenses and common cinematographic properties. In this paper, we detail the implementation of our AirSim modification, CinemAirSim, present examples that illustrate the potential of the new tool, and highlight the new research opportunities that the use of cinematic cameras can bring to research in robotics and control. https://github.com/ppueyor/CinematicAirSim
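The focal blur mentioned above can be reasoned about with the standard thin-lens model, which a pure pinhole camera cannot reproduce. The short sketch below is generic optics, independent of the CinemAirSim API (whose exact calls are not shown here); the focal length, f-number, and focus distance are illustrative values.

# Generic thin-lens circle-of-confusion sketch -- not the CinemAirSim API.
def circle_of_confusion(depth_m, focal_length_mm=50.0, f_number=2.8, focus_dist_m=5.0):
    """Approximate blur-circle diameter (in mm on the sensor) for a point at depth_m."""
    f = focal_length_mm / 1000.0                 # focal length in meters
    aperture = f / f_number                      # aperture diameter in meters
    s, d = focus_dist_m, depth_m
    coc_m = aperture * abs(d - s) / d * f / (s - f)
    return coc_m * 1000.0

# Points far from the 5 m focus plane blur more than points near it.
for depth in (1.0, 5.0, 20.0):
    print(f"depth {depth:5.1f} m -> blur circle {circle_of_confusion(depth):.3f} mm")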

AirSim Drone Racing Lab

Mar 12, 2020
Ratnesh Madaan, Nicholas Gyde, Sai Vemprala, Matthew Brown, Keiko Nagami, Tim Taubner, Eric Cristofalo, Davide Scaramuzza, Mac Schwager, Ashish Kapoor

Autonomous drone racing is a challenging research problem at the intersection of computer vision, planning, state estimation, and control. We introduce AirSim Drone Racing Lab, a simulation framework for enabling fast prototyping of algorithms for autonomy and enabling machine learning research in this domain, with the goal of reducing the time, money, and risks associated with field robotics. Our framework enables the generation of racing tracks in multiple photo-realistic environments and the orchestration of drone races; it comes with a suite of gate assets, supports multiple sensor modalities (monocular, depth, neuromorphic events, optical flow) and different camera models, and allows benchmarking of planning, control, computer vision, and learning-based algorithms. We used our framework to host a simulation-based drone racing competition at NeurIPS 2019. The competition binaries are available at our GitHub repository.
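As a toy illustration of the kind of planning baseline such a framework can benchmark (this is not part of AirSim Drone Racing Lab, and the gate positions below are made up), the sketch below samples position setpoints through a sequence of gate centers by piecewise-linear interpolation at a fixed cruise speed.

# Toy gate-following baseline -- not part of the AirSim Drone Racing Lab API.
import numpy as np

def waypoints_through_gates(gates, speed=8.0, dt=0.05):
    """Return an (N, 3) array of position setpoints visiting each gate center in order."""
    gates = np.asarray(gates, dtype=float)
    setpoints = [gates[0]]
    for a, b in zip(gates[:-1], gates[1:]):
        steps = max(int(np.linalg.norm(b - a) / (speed * dt)), 1)
        for k in range(1, steps + 1):
            setpoints.append(a + (b - a) * k / steps)
    return np.array(setpoints)

# Three hypothetical gate centers (x, y, z) in meters (NED-style, z negative up).
gates = [(0.0, 0.0, -2.0), (10.0, 5.0, -3.0), (20.0, 0.0, -2.0)]
print(waypoints_through_gates(gates).shape)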

* 14 pages, 6 figures 

A Real-Time Game Theoretic Planner for Autonomous Two-Player Drone Racing

Jan 26, 2018
Riccardo Spica, Davide Falanga, Eric Cristofalo, Eduardo Montijano, Davide Scaramuzza, Mac Schwager

To be successful in multi-player drone racing, a player must not only follow the race track in an optimal way, but also compete with other drones through strategic blocking, faking, and opportunistic passing while avoiding collisions. Since unveiling one's own strategy to the adversaries is not desirable, this requires each player to independently predict the other players' future actions. Nash equilibria are a powerful tool to model this and similar multi-agent coordination problems in which the absence of communication impedes full coordination between the agents. In this paper, we propose a novel receding horizon planning algorithm that, exploiting sensitivity analysis within an iterated best response computational scheme, can approximate Nash equilibria in real time. We also describe a vision-based pipeline that allows each player to estimate its opponent's relative position. We demonstrate that our solution effectively competes against alternative strategies in a large number of drone racing simulations. Hardware experiments with onboard vision sensing prove the practicality of our strategy.
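As a loose intuition for the iterated best response idea (a toy static game, not the paper's receding-horizon planner with sensitivity analysis), the sketch below has two racers repeatedly best-respond to each other's latest lateral position on a track cross-section; the cost terms and weights are assumptions made for illustration.

# Toy iterated best response -- not the paper's algorithm.
import numpy as np

LATERALS = np.linspace(-1.0, 1.0, 201)    # candidate lateral offsets on the track

def cost(me, opponent, raceline=0.0, w_race=1.0, w_prox=0.5):
    """Stay near the raceline while keeping separation from the opponent."""
    return w_race * (me - raceline) ** 2 + w_prox * np.exp(-4.0 * (me - opponent) ** 2)

def best_response(opponent):
    return LATERALS[np.argmin(cost(LATERALS, opponent))]

# Alternate best responses; a fixed point approximates a Nash equilibrium of this toy game.
a, b = 0.8, -0.8
for _ in range(20):
    a, b = best_response(b), best_response(a)
print(f"approximate equilibrium laterals: a = {a:.2f}, b = {b:.2f}")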

Out-of-focus: Learning Depth from Image Bokeh for Robotic Perception

May 02, 2017
Eric Cristofalo, Zijian Wang

In this project, we propose a novel approach for estimating depth from RGB images. Traditionally, most work uses a single RGB image to estimate depth, which is inherently difficult and generally results in poor performance, even with thousands of data examples. In this work, we instead use multiple RGB images that were captured while changing the focus of the camera's lens. This method leverages the natural depth information correlated to the different patterns of clarity/blur in the sequence of focal images, which helps distinguish objects at different depths. Since no such dataset exists for learning this mapping, we collect our own dataset using customized hardware. We then use a convolutional neural network to learn depth from the stacked focal images. Comparative studies were conducted on both a standard RGB-D dataset and our own dataset (learning from both single and multiple images), and the results verified that stacked focal images yield better depth estimation than using just a single RGB image.
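For concreteness, a schematic of the kind of network described above (an illustrative sketch, not the paper's architecture) could take a stack of K focal images concatenated along the channel axis and regress a dense depth map; the channel counts and kernel sizes below are assumptions.

# Schematic focal-stack-to-depth CNN -- an assumption-laden sketch, not the paper's model.
import torch
import torch.nn as nn

class FocalStackDepthNet(nn.Module):
    def __init__(self, num_focal_images=5):
        super().__init__()
        in_ch = 3 * num_focal_images            # K RGB focal images stacked channel-wise
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),     # one depth value per pixel
        )

    def forward(self, focal_stack):             # (B, 3*K, H, W) -> (B, 1, H, W)
        return self.net(focal_stack)

# Example: a batch of two 5-image focal stacks at 64x64 resolution.
model = FocalStackDepthNet(num_focal_images=5)
print(model(torch.randn(2, 15, 64, 64)).shape)  # torch.Size([2, 1, 64, 64])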
