Javier Yu

NerfBridge: Bringing Real-time, Online Neural Radiance Field Training to Robotics

May 16, 2023
Javier Yu, Jun En Low, Keiko Nagami, Mac Schwager

This work was presented at the IEEE International Conference on Robotics and Automation 2023 Workshop on Unconventional Spatial Representations. Neural radiance fields (NeRFs) are a class of implicit scene representations that model 3D environments from color images. NeRFs are expressive and can model the complex and multi-scale geometry of real-world environments, which potentially makes them a powerful tool for robotics applications. Modern NeRF training libraries can generate a photo-realistic NeRF from a static data set in just a few seconds, but are designed for offline use and require a slow pose optimization pre-computation step. In this work we propose NerfBridge, an open-source bridge between the Robot Operating System (ROS) and the popular Nerfstudio library for real-time, online training of NeRFs from a stream of images. NerfBridge enables rapid development of research on applications of NeRFs in robotics by providing an extensible interface to the efficient training pipelines and model libraries provided by Nerfstudio. As an example use case, we outline a hardware setup that can be used with NerfBridge to train a NeRF from images captured by a camera mounted to a quadrotor in both indoor and outdoor environments. An accompanying video is available at https://youtu.be/EH0SLn-RcDg and code at https://github.com/javieryu/nerf_bridge.
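The online-training idea described above can be sketched, independently of ROS, as a producer-consumer pattern: images arrive asynchronously, are buffered, and are folded into the training set between optimization steps. The class and function names below are illustrative stand-ins, not NerfBridge's actual API:

```python
import queue

def make_image_stream(n):
    """Yield n fake (image, pose) pairs, as a camera driver might."""
    for i in range(n):
        yield (f"image_{i}", f"pose_{i}")

class OnlineTrainer:
    """Minimal online-training skeleton: images arrive asynchronously
    and are folded into the training set between optimization steps."""

    def __init__(self):
        self.buffer = queue.Queue()   # thread-safe hand-off from the receiver
        self.dataset = []             # growing training set
        self.steps = 0

    def on_image(self, image, pose):
        # Called from the receiving thread (e.g., a ROS subscriber callback).
        self.buffer.put((image, pose))

    def train_step(self):
        # Drain any newly arrived images, then run one optimization step.
        while not self.buffer.empty():
            self.dataset.append(self.buffer.get())
        if self.dataset:
            self.steps += 1  # placeholder for one gradient step on self.dataset

trainer = OnlineTrainer()
for img, pose in make_image_stream(5):
    trainer.on_image(img, pose)
    trainer.train_step()

print(len(trainer.dataset), trainer.steps)  # → 5 5
```

The key design point this illustrates is that training never blocks on the camera: the subscriber callback only enqueues, and the training loop absorbs new data at step boundaries.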

Distributed Optimization Methods for Multi-Robot Systems: Part II -- A Survey

Jan 26, 2023
Ola Shorinwa, Trevor Halsted, Javier Yu, Mac Schwager

Although the field of distributed optimization is well-developed, relevant literature focused on the application of distributed optimization to multi-robot problems is limited. This survey constitutes the second part of a two-part series on distributed optimization applied to multi-robot problems. In this paper, we survey three main classes of distributed optimization algorithms -- distributed first-order methods, distributed sequential convex programming methods, and alternating direction method of multipliers (ADMM) methods -- focusing on fully-distributed methods that do not require coordination or computation by a central computer. We describe the fundamental structure of each category and note important variations around this structure, designed to address its associated drawbacks. Further, we discuss the practical implications of noteworthy assumptions made by distributed optimization algorithms, noting the classes of robotics problems suitable for these algorithms. Moreover, we identify important open research challenges in distributed optimization, specifically for robotics problems.

* arXiv admin note: substantial text overlap with arXiv:2103.12840 

Distributed Optimization Methods for Multi-Robot Systems: Part I -- A Tutorial

Jan 26, 2023
Ola Shorinwa, Trevor Halsted, Javier Yu, Mac Schwager

Distributed optimization provides a framework for deriving distributed algorithms for a variety of multi-robot problems. This tutorial constitutes the first part of a two-part series on distributed optimization applied to multi-robot problems, which seeks to advance the application of distributed optimization in robotics. In this tutorial, we demonstrate that many canonical multi-robot problems can be cast within the distributed optimization framework, such as multi-robot simultaneous localization and mapping (SLAM), multi-robot target tracking, and multi-robot task assignment problems. We identify three broad categories of distributed optimization algorithms: distributed first-order methods, distributed sequential convex programming, and the alternating direction method of multipliers (ADMM). We describe the basic structure of each category and provide representative algorithms within each category. We then work through a simulation case study of multiple drones collaboratively tracking a ground vehicle. We compare solutions to this problem using a number of different distributed optimization algorithms. In addition, we implement a distributed optimization algorithm in hardware on a network of Raspberry Pis communicating with XBee modules to illustrate robustness to the challenges of real-world communication networks.
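As a minimal illustration of the first of these categories, the sketch below runs decentralized gradient descent on a toy consensus problem; the cost functions, mixing weights, and step-size schedule are assumptions chosen for the example, not taken from the tutorial:

```python
import numpy as np

# Decentralized gradient descent (a distributed first-order method) on a toy
# problem: robot i privately holds f_i(x) = 0.5 * (x - a_i)^2, and the team
# jointly minimizes sum_i f_i(x), whose solution is the mean of the a_i.
a = np.array([1.0, 4.0, 7.0])            # private data of robots 0, 1, 2
W = np.array([[0.50, 0.25, 0.25],        # doubly stochastic mixing matrix
              [0.25, 0.50, 0.25],        # for a fully connected 3-robot graph
              [0.25, 0.25, 0.50]])
x = np.zeros(3)                          # each robot's local estimate

for k in range(2000):
    grad = x - a                         # local gradients (no data exchanged)
    alpha = 1.0 / (k + 10)               # diminishing step size
    x = W @ x - alpha * grad             # mix with neighbors, then descend

print(np.round(x, 2))                    # all estimates near the optimum 4.0
```

Each robot only ever transmits its current estimate, never its private data `a_i`; the diminishing step size is what lets the estimates reach exact consensus rather than a neighborhood of it.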

DiNNO: Distributed Neural Network Optimization for Multi-Robot Collaborative Learning

Sep 17, 2021
Javier Yu, Joseph A. Vincent, Mac Schwager

We present a distributed algorithm that enables a group of robots to collaboratively optimize the parameters of a deep neural network model while communicating over a mesh network. Each robot only has access to its own data and maintains its own version of the neural network, but eventually learns a model that is as good as if it had been trained on all the data centrally. No robot sends raw data over the wireless network, preserving data privacy and ensuring efficient use of wireless bandwidth. At each iteration, each robot approximately optimizes an augmented Lagrangian function, then communicates the resulting weights to its neighbors, updates dual variables, and repeats. Eventually, all robots' local network weights reach a consensus. For convex objective functions, we prove this consensus is a global optimum. We compare our algorithm to two existing distributed deep neural network training algorithms in (i) an MNIST image classification task, (ii) a multi-robot implicit mapping task, and (iii) a multi-robot reinforcement learning task. In all of our experiments, our method outperformed the baselines and achieved validation loss equivalent to that of centrally trained models. See https://msl.stanford.edu/projects/dist_nn_train for videos and a link to our GitHub repository.
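The iteration described above can be sketched on a toy problem in which each robot's "network weights" are a small vector and the local loss is quadratic. The inner gradient loop stands in for the approximate minimization of the augmented Lagrangian (the actual method runs SGD/Adam steps on a neural network loss); all problem data here are assumptions made for the example:

```python
import numpy as np

# Robot i privately holds target a_i and local loss f_i(x) = 0.5*||x - a_i||^2;
# the team objective sum_i f_i(x) is minimized at the mean of the a_i.
a = np.array([[1.0, 2.0], [4.0, 5.0], [7.0, 8.0]])   # private data per robot
n, d = a.shape
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}        # fully connected mesh
rho = 1.0                                            # penalty parameter
x = np.zeros((n, d))                                 # local weight copies
p = np.zeros((n, d))                                 # dual variables

for _ in range(200):
    x_prev = x.copy()                                # weights from last round
    for i in range(n):
        # Dual ascent on the pairwise consensus constraints.
        p[i] += rho * sum(x_prev[i] - x_prev[j] for j in neighbors[i])
        # Approximately minimize the local augmented Lagrangian with a few
        # gradient steps (stand-in for SGD on the neural network loss).
        xi = x_prev[i].copy()
        for _ in range(20):
            grad = (xi - a[i]) + p[i] + 2 * rho * sum(
                xi - 0.5 * (x_prev[i] + x_prev[j]) for j in neighbors[i])
            xi -= 0.1 * grad
        x[i] = xi

print(np.round(x, 2))   # each row approaches the centralized optimum [4., 5.]
```

Note that only the weight vectors `x_prev[j]` cross the network, never the private `a[i]`, mirroring the privacy property claimed in the abstract.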

* Submitted to IEEE Robotics and Automation Letters (with ICRA conference option) 

A Survey of Distributed Optimization Methods for Multi-Robot Systems

Mar 23, 2021
Trevor Halsted, Ola Shorinwa, Javier Yu, Mac Schwager

Distributed optimization consists of multiple computation nodes working together to minimize a common objective function through local computation iterations and network-constrained communication steps. In the context of robotics, distributed optimization algorithms can enable multi-robot systems to accomplish tasks in the absence of centralized coordination. We present a general framework for applying distributed optimization as a module in a robotics pipeline. We survey several classes of distributed optimization algorithms and assess their practical suitability for multi-robot applications. We further compare the performance of different classes of algorithms in simulations for three prototypical multi-robot problem scenarios. The Consensus Alternating Direction Method of Multipliers (C-ADMM) emerges as a particularly attractive and versatile distributed optimization method for multi-robot systems.
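As a concrete illustration of C-ADMM, the sketch below solves a distributed least-squares problem in which each robot holds a private block of measurements; the problem data, graph, and penalty parameter are assumptions made for the example, not drawn from the survey's case studies:

```python
import numpy as np

# C-ADMM for distributed least squares: each robot holds a private block
# (A_i, b_i) and the team solves min_x sum_i 0.5*||A_i x - b_i||^2
# without sharing raw measurements.
rng = np.random.default_rng(0)
d = 3
x_true = rng.normal(size=d)
A = [rng.normal(size=(5, d)) for _ in range(3)]      # private measurement maps
b = [Ai @ x_true for Ai in A]                        # noiseless measurements
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}        # fully connected graph
rho = 1.0
x = [np.zeros(d) for _ in range(3)]                  # local estimates
p = [np.zeros(d) for _ in range(3)]                  # dual variables

for _ in range(300):
    x_new = []
    for i in range(3):
        di = len(neighbors[i])
        # Closed-form minimizer of the local augmented Lagrangian.
        lhs = A[i].T @ A[i] + 2 * rho * di * np.eye(d)
        rhs = A[i].T @ b[i] - p[i] + rho * sum(x[i] + x[j]
                                               for j in neighbors[i])
        x_new.append(np.linalg.solve(lhs, rhs))
    for i in range(3):
        # Dual ascent on the pairwise consensus constraints.
        p[i] += rho * sum(x_new[i] - x_new[j] for j in neighbors[i])
    x = x_new

print(np.round(x[0], 3))   # local estimates converge toward x_true
```

Because the primal update has a closed form here, each iteration costs one small linear solve per robot plus one round of neighbor communication, which is part of what makes C-ADMM attractive in practice.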

* Submitted to IEEE T-RO 

Distributed Multi-Target Tracking for Autonomous Vehicle Fleets

Apr 13, 2020
Ola Shorinwa, Javier Yu, Trevor Halsted, Alex Koufos, Mac Schwager

We present a scalable distributed target tracking algorithm based on the alternating direction method of multipliers that is well-suited for a fleet of autonomous cars communicating over a vehicle-to-vehicle network. Each sensing vehicle communicates with its neighbors to execute iterations of a Kalman filter-like update such that each agent's estimate approximates the centralized maximum a posteriori estimate without requiring the communication of measurements. We show that our method outperforms the Consensus Kalman Filter in recovering the centralized estimate given a fixed communication bandwidth. We also demonstrate the algorithm in a high-fidelity urban driving simulator (CARLA), in which 50 autonomous cars connected on a time-varying communication network track the positions and velocities of 50 target vehicles using on-board cameras.
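The paper's tracker is ADMM-based; the sketch below illustrates the same end goal, recovering the centralized estimate without exchanging raw measurements, using the simpler technique of average consensus on information-filter terms (a deliberate substitution, with an illustrative static target and ring network):

```python
import numpy as np

# Each vehicle measures a static target position with noise, converts its
# measurement into information form (Y_i, y_i), and then runs average
# consensus; scaling the consensus averages by n recovers the centralized
# sums, and hence the centralized MAP estimate, at every vehicle.
rng = np.random.default_rng(1)
x_true = np.array([2.0, -1.0])                  # static target position
n = 4                                           # number of sensing vehicles
z = [x_true + 0.1 * rng.normal(size=2) for _ in range(n)]

# Information contributions with H_i = I and R_i = 0.01 * I (assumed).
Y = np.stack([100.0 * np.eye(2) for _ in range(n)])   # information matrices
y = np.stack([100.0 * zi for zi in z])                # information vectors

x_central = np.linalg.solve(Y.sum(axis=0), y.sum(axis=0))  # centralized MAP

# Doubly stochastic mixing matrix for a 4-vehicle ring network.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

for _ in range(60):                             # consensus iterations
    Y = np.einsum('ij,jab->iab', W, Y)          # mix information matrices
    y = W @ y                                   # mix information vectors

# Every vehicle reconstructs the centralized estimate from its local average.
x_hat = [np.linalg.solve(n * Y[i], n * y[i]) for i in range(n)]
print(np.allclose(x_hat[0], x_central))         # → True
```

Only the compact information terms (a 2x2 matrix and a 2-vector per vehicle) traverse the network, which captures the bandwidth motivation in the abstract even though the paper's actual updates are ADMM iterations rather than plain averaging.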
