
Reza Mahjourian


Robotic Table Tennis: A Case Study into a High Speed Learning System

Sep 06, 2023
David B. D'Ambrosio, Jonathan Abelian, Saminda Abeyruwan, Michael Ahn, Alex Bewley, Justin Boyd, Krzysztof Choromanski, Omar Cortes, Erwin Coumans, Tianli Ding, Wenbo Gao, Laura Graesser, Atil Iscen, Navdeep Jaitly, Deepali Jain, Juhana Kangaspunta, Satoshi Kataoka, Gus Kouretas, Yuheng Kuang, Nevena Lazic, Corey Lynch, Reza Mahjourian, Sherry Q. Moore, Thinh Nguyen, Ken Oslund, Barney J Reed, Krista Reymann, Pannag R. Sanketi, Anish Shankar, Pierre Sermanet, Vikas Sindhwani, Avi Singh, Vincent Vanhoucke, Grace Vesom, Peng Xu

We present a deep-dive into a real-world robotic learning system that, in previous work, was shown to be capable of hundreds of table tennis rallies with a human and has the ability to precisely return the ball to desired targets. This system puts together a highly optimized perception subsystem, a high-speed low-latency robot controller, a simulation paradigm that can prevent damage in the real world and also train policies for zero-shot transfer, and automated real world environment resets that enable autonomous training and evaluation on physical robots. We complement a complete system description, including numerous design decisions that are typically not widely disseminated, with a collection of studies that clarify the importance of mitigating various sources of latency, accounting for training and deployment distribution shifts, robustness of the perception system, sensitivity to policy hyper-parameters, and choice of action space. A video demonstrating the components of the system and details of experimental results can be found at https://youtu.be/uFcnWjB42I0.
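
The studies above center on mitigating latency. As an illustration only (none of the names, rates, or thresholds below come from the paper; get_latest_ball_observation, compute_action, and send_robot_command are hypothetical callables), here is a minimal Python sketch of how a high-speed control loop might guard against acting on stale perception output.

```python
import time

# Hypothetical timing budget for a 100 Hz control loop; the paper's actual
# rates and thresholds may differ.
STEP_PERIOD_S = 0.01      # one control step every 10 ms
MAX_OBS_AGE_S = 0.02      # ignore observations older than 20 ms

def control_loop(get_latest_ball_observation, compute_action, send_robot_command):
    """Skip acting on stale observations so latency spikes do not
    propagate into the policy's view of the ball state."""
    while True:
        step_start = time.monotonic()
        obs = get_latest_ball_observation()   # returns (timestamp, ball_state) or None
        if obs is not None:
            obs_time, ball_state = obs
            if step_start - obs_time <= MAX_OBS_AGE_S:
                send_robot_command(compute_action(ball_state))
            # else: hold the previous command rather than act on old data
        # Sleep only for the remainder of the period to keep a steady rate.
        elapsed = time.monotonic() - step_start
        time.sleep(max(0.0, STEP_PERIOD_S - elapsed))
```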

* Published and presented at Robotics: Science and Systems (RSS2023) 

Instance Segmentation with Cross-Modal Consistency

Oct 14, 2022
Alex Zihao Zhu, Vincent Casser, Reza Mahjourian, Henrik Kretzschmar, Sören Pirk

Segmenting object instances is a key task in machine perception, with safety-critical applications in robotics and autonomous driving. We introduce a novel approach to instance segmentation that jointly leverages measurements from multiple sensor modalities, such as cameras and LiDAR. Our method learns to predict embeddings for each pixel or point that give rise to a dense segmentation of the scene. Specifically, our technique applies contrastive learning to points in the scene both across sensor modalities and the temporal domain. We demonstrate that this formulation encourages the models to learn embeddings that are invariant to viewpoint variations and consistent across sensor modalities. We further demonstrate that the embeddings are stable over time as objects move around the scene. This not only provides stable instance masks, but can also provide valuable signals to downstream tasks, such as object tracking. We evaluate our method on the Cityscapes and KITTI-360 datasets. We further conduct a number of ablation studies, demonstrating benefits when applying additional inputs for the contrastive loss.
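
As a rough illustration of the cross-modal contrastive idea (not the paper's implementation; the tensor shapes, temperature value, and sampling scheme are assumptions), the PyTorch sketch below pulls together camera-pixel and LiDAR-point embeddings that share an instance id and pushes apart the rest.

```python
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(cam_emb, lidar_emb, cam_ids, lidar_ids, temperature=0.1):
    """Pull together embeddings of camera pixels and LiDAR points that belong
    to the same instance id, push apart embeddings from different instances.

    cam_emb:   (N, D) embeddings sampled from camera pixels
    lidar_emb: (M, D) embeddings sampled from LiDAR points
    cam_ids:   (N,)   instance ids for the camera samples
    lidar_ids: (M,)   instance ids for the LiDAR samples
    """
    cam_emb = F.normalize(cam_emb, dim=-1)
    lidar_emb = F.normalize(lidar_emb, dim=-1)
    logits = cam_emb @ lidar_emb.t() / temperature            # (N, M) similarities
    positives = (cam_ids[:, None] == lidar_ids[None, :]).float()
    # Softmax over LiDAR samples; maximize probability mass on same-instance pairs.
    log_prob = F.log_softmax(logits, dim=1)
    loss = -(positives * log_prob).sum(dim=1) / positives.sum(dim=1).clamp(min=1)
    return loss.mean()
```

The same construction can be applied across time by treating samples of the same instance in adjacent frames as additional positives.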

* 8 pages, 9 figures, 5 tables. Presented at IROS 2022 

StopNet: Scalable Trajectory and Occupancy Prediction for Urban Autonomous Driving

Jun 02, 2022
Jinkyu Kim, Reza Mahjourian, Scott Ettinger, Mayank Bansal, Brandyn White, Ben Sapp, Dragomir Anguelov

We introduce a motion forecasting (behavior prediction) method that meets the latency requirements for autonomous driving in dense urban environments without sacrificing accuracy. A whole-scene sparse input representation allows StopNet to scale to predicting trajectories for hundreds of road agents with reliable latency. In addition to predicting trajectories, our scene encoder lends itself to predicting whole-scene probabilistic occupancy grids, a complementary output representation suitable for busy urban environments. Occupancy grids allow the AV to reason collectively about the behavior of groups of agents without processing their individual trajectories. We demonstrate the effectiveness of our sparse input representation and our model in terms of computation and accuracy over three datasets. We further show that co-training consistent trajectory and occupancy predictions improves upon state-of-the-art performance under standard metrics.
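
The abstract reports gains from co-training the trajectory and occupancy outputs. One simple way to combine the two heads is sketched below; the tensor shapes, loss terms, and weighting are illustrative and not StopNet's actual objective.

```python
import torch
import torch.nn.functional as F

def cotraining_loss(pred_trajs, gt_trajs, traj_valid, occ_logits, gt_occ, occ_weight=1.0):
    """Combine a per-agent trajectory regression loss with a whole-scene
    occupancy-grid loss, as one plausible co-training objective.

    pred_trajs: (A, T, 2) predicted future waypoints per agent
    gt_trajs:   (A, T, 2) ground-truth waypoints
    traj_valid: (A, T)    1.0 where the ground truth is observed, else 0.0
    occ_logits: (T, H, W) predicted occupancy logits per future timestep
    gt_occ:     (T, H, W) rasterized ground-truth occupancy in [0, 1]
    """
    traj_err = (pred_trajs - gt_trajs).norm(dim=-1)                   # (A, T)
    traj_loss = (traj_err * traj_valid).sum() / traj_valid.sum().clamp(min=1)
    occ_loss = F.binary_cross_entropy_with_logits(occ_logits, gt_occ)
    return traj_loss + occ_weight * occ_loss
```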

* IEEE International Conference on Robotics and Automation 2022  

Occupancy Flow Fields for Motion Forecasting in Autonomous Driving

Mar 08, 2022
Reza Mahjourian, Jinkyu Kim, Yuning Chai, Mingxing Tan, Ben Sapp, Dragomir Anguelov

We propose Occupancy Flow Fields, a new representation for motion forecasting of multiple agents, an important task in autonomous driving. Our representation is a spatio-temporal grid with each grid cell containing both the probability of the cell being occupied by any agent, and a two-dimensional flow vector representing the direction and magnitude of the motion in that cell. Our method successfully mitigates shortcomings of the two most commonly-used representations for motion forecasting: trajectory sets and occupancy grids. Although occupancy grids efficiently represent the probabilistic location of many agents jointly, they do not capture agent motion and lose the agent identities. To address these shortcomings, we propose a deep learning architecture that generates Occupancy Flow Fields with the help of a new flow trace loss that establishes consistency between the occupancy and flow predictions. We demonstrate the effectiveness of our approach using three metrics on occupancy prediction, motion estimation, and agent ID recovery. In addition, we introduce the problem of predicting speculative agents, which are currently-occluded agents that may appear in the future through dis-occlusion or by entering the field of view. We report experimental results on a large in-house autonomous driving dataset and the public INTERACTION dataset, and show that our model outperforms state-of-the-art models.
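
The representation itself is concrete: each grid cell holds an occupancy probability plus a 2D flow vector. The sketch below shows one plausible reading of the occupancy/flow consistency idea, warping the previous occupancy along backward flow vectors and penalizing disagreement with the current occupancy. It is a stand-in for, not a reproduction of, the paper's flow trace loss; the flow convention and cell_size parameter are assumptions.

```python
import torch
import torch.nn.functional as F

def warp_prev_occupancy(occ_prev, flow_t, cell_size=1.0):
    """Gather, for each cell, the previous-step occupancy at the location the
    flow says that cell's content came from. Assumes flow_t stores backward
    (current -> origin) displacements in units of cell_size.

    occ_prev: (H, W)    occupancy probabilities at time t-1
    flow_t:   (H, W, 2) per-cell (dx, dy) backward motion
    """
    h, w = occ_prev.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    src_x = xs.float() + flow_t[..., 0] / cell_size
    src_y = ys.float() + flow_t[..., 1] / cell_size
    # Normalize sampling locations to [-1, 1] for grid_sample (x first, then y).
    grid = torch.stack([2 * src_x / (w - 1) - 1, 2 * src_y / (h - 1) - 1], dim=-1)
    warped = F.grid_sample(occ_prev[None, None], grid[None], align_corners=True)
    return warped[0, 0]                       # (H, W)

def occupancy_flow_consistency(occ_prev, occ_t, flow_t):
    """Penalize cells where the current occupancy is not explained by flowing
    the previous occupancy along the predicted vectors."""
    return (warp_prev_occupancy(occ_prev, flow_t) - occ_t).abs().mean()
```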

* IEEE Robotics and Automation Letters  

Identifying Driver Interactions via Conditional Behavior Prediction

Apr 20, 2021
Ekaterina Tolstaya, Reza Mahjourian, Carlton Downey, Balakrishnan Vadarajan, Benjamin Sapp, Dragomir Anguelov

Interactive driving scenarios, such as lane changes, merges and unprotected turns, are some of the most challenging situations for autonomous driving. Planning in interactive scenarios requires accurately modeling the reactions of other agents to different future actions of the ego agent. We develop end-to-end models for conditional behavior prediction (CBP) that take as input a query future trajectory for the ego agent, and predict distributions over future trajectories for other agents conditioned on the query. Leveraging such a model, we develop a general-purpose agent interactivity score derived from probabilistic first principles. The interactivity score allows us to find interesting interactive scenarios for training and evaluating behavior prediction models. We further demonstrate that the proposed score is effective for agent prioritization under computational budget constraints.
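
The abstract does not spell out the score. One natural instantiation "from probabilistic first principles" is the divergence between an agent's prediction conditioned on the ego query and its unconditional (marginal) prediction; the sketch below assumes both are discrete distributions over the same K trajectory modes, which is a simplification.

```python
import numpy as np

def interactivity_score(p_conditional, p_marginal, eps=1e-9):
    """KL divergence KL(conditional || marginal) between an agent's trajectory
    distribution conditioned on the ego query and its unconditional one.
    A near-zero score means the ego query barely changes the agent's behavior.

    p_conditional, p_marginal: arrays of shape (K,) over the same K trajectory
    modes, each summing to 1.
    """
    p = np.clip(np.asarray(p_conditional, dtype=np.float64), eps, 1.0)
    q = np.clip(np.asarray(p_marginal, dtype=np.float64), eps, 1.0)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Example: conditioning on an ego lane change shifts the agent toward yielding.
print(interactivity_score([0.7, 0.2, 0.1], [0.3, 0.4, 0.3]))   # > 0: interactive
print(interactivity_score([0.3, 0.4, 0.3], [0.3, 0.4, 0.3]))   # 0.0: not interactive
```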

Unsupervised Monocular Depth and Ego-motion Learning with Structure and Semantics

Jun 12, 2019
Vincent Casser, Soeren Pirk, Reza Mahjourian, Anelia Angelova

We present an approach which takes advantage of both structure and semantics for unsupervised monocular learning of depth and ego-motion. More specifically, we model the motion of individual objects and learn their 3D motion vector jointly with depth and ego-motion. We obtain more accurate results, especially for challenging dynamic scenes not addressed by previous approaches. This is an extended version of Casser et al. [AAAI'19]. Code and models have been open sourced at https://sites.google.com/corp/view/struct2depth.
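
As a simplified picture of the decomposition described above (not the paper's formulation; it omits projection, camera intrinsics, and per-object rotation), the sketch below composes camera ego-motion with a learned per-object 3D translation for the points inside each object mask.

```python
import numpy as np

def transform_scene_points(points, ego_rotation, ego_translation,
                           object_masks, object_translations):
    """Move 3D points from frame t to frame t+1 by composing camera ego-motion
    with a per-object 3D translation for points inside each object mask.

    points:              (N, 3) 3D points in the camera frame at time t
    ego_rotation:        (3, 3) rotation of the camera motion
    ego_translation:     (3,)   translation of the camera motion
    object_masks:        (K, N) boolean membership of each point in each object
    object_translations: (K, 3) learned 3D motion vector per object
    """
    # Rigid background motion induced by the camera.
    moved = points @ ego_rotation.T + ego_translation
    # Add each object's own motion on top of the ego-motion for its points.
    for mask, t_obj in zip(object_masks, object_translations):
        moved[mask] += t_obj
    return moved
```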

* CVPR Workshop on Visual Odometry & Computer Vision Applications Based on Location Clues (VOCVALC), 2019. This is an extension of arXiv:1811.06152: Depth Prediction Without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos. Thirty-Third AAAI Conference on Artificial Intelligence (AAAI'19) 

Hierarchical Policy Design for Sample-Efficient Learning of Robot Table Tennis Through Self-Play

Feb 17, 2019
Reza Mahjourian, Risto Miikkulainen, Nevena Lazic, Sergey Levine, Navdeep Jaitly

Training robots with physical bodies requires developing new methods and action representations that allow the learning agents to explore the space of policies efficiently. This work studies sample-efficient learning of complex policies in the context of robot table tennis. It incorporates learning into a hierarchical control framework using a model-free strategy layer (which requires complex reasoning about opponents that is difficult to do in a model-based way), model-based prediction of external objects (which are difficult to control directly with analytic control methods, but governed by learnable and relatively simple laws of physics), and analytic controllers for the robot itself. Human demonstrations are used to train dynamics models, which together with the analytic controller allow any physically capable robot to play table tennis without training episodes. Using only about 7,000 demonstrated trajectories, a striking policy can hit ball targets with about 20 cm error. Self-play is used to train cooperative and adversarial strategies on top of model-based striking skills trained from human demonstrations. After only about 24,000 strikes in self-play, the agent learns to best exploit the human dynamics models for longer cooperative games. Further experiments demonstrate that more flexible variants of the policy can discover new strikes not demonstrated by humans and achieve higher performance at the expense of lower sample-efficiency. Experiments are carried out in a virtual reality environment using sensory observations that are obtainable in the real world. The high sample-efficiency demonstrated in the evaluations shows that the proposed method is suitable for learning directly on physical robots without transfer of models or policies from simulation. Supplementary material available at https://sites.google.com/view/robottabletennis
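
To make the three-layer split concrete, here is a schematic sketch of one control step; all names (StrikeTarget, strategy_policy, ball_dynamics_model, analytic_controller) and the exact interfaces are illustrative, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class StrikeTarget:
    """High-level decision produced by the strategy layer: where to aim the
    return and how fast. Fields are illustrative."""
    land_x: float
    land_y: float
    speed: float

def hierarchical_step(ball_observation, strategy_policy, ball_dynamics_model,
                      analytic_controller):
    """One plausible flow through the three layers described in the abstract:
    model-free strategy -> learned ball dynamics -> analytic robot control."""
    # 1. Strategy layer (model-free): choose a target landing point and speed.
    target = strategy_policy(ball_observation)                     # -> StrikeTarget
    # 2. Dynamics layer (model-based): predict the incoming ball and the paddle
    #    pose/velocity needed to realize the chosen target.
    paddle_pose, paddle_velocity = ball_dynamics_model(ball_observation, target)
    # 3. Analytic controller: solve for joint commands achieving the paddle state.
    return analytic_controller(paddle_pose, paddle_velocity)
```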

Future Segmentation Using 3D Structure

Nov 28, 2018
Suhani Vora, Reza Mahjourian, Soeren Pirk, Anelia Angelova

Predicting the future to anticipate the outcome of events and actions is a critical attribute of autonomous agents, particularly for agents that must rely heavily on real-time visual data for decision making. Working towards this capability, we address the task of predicting future frame segmentation from a stream of monocular video by leveraging the 3D structure of the scene. Our framework is based on learnable sub-modules capable of predicting pixel-wise scene semantic labels, depth, and camera ego-motion of adjacent frames. We further propose a recurrent neural network based model capable of predicting future ego-motion trajectory as a function of a series of past ego-motion steps. Ultimately, we observe that leveraging 3D structure in the model facilitates successful prediction, achieving state-of-the-art accuracy in future semantic segmentation.
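
As an illustration of the recurrent ego-motion component (the sizes, the 6-DoF parameterization, and predicting a single next step rather than a full trajectory are all simplifying assumptions), a minimal PyTorch version might look like this:

```python
import torch
import torch.nn as nn

class EgoMotionForecaster(nn.Module):
    """Minimal recurrent model mapping a sequence of past ego-motion steps to
    the next step; sizes are illustrative."""

    def __init__(self, motion_dim=6, hidden_dim=64):
        super().__init__()
        self.gru = nn.GRU(motion_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, motion_dim)

    def forward(self, past_motion):
        # past_motion: (B, T, 6), e.g. translation (3) + rotation angles (3) per step
        _, h = self.gru(past_motion)          # h: (1, B, hidden_dim)
        return self.head(h[-1])               # (B, 6) predicted next ego-motion step

model = EgoMotionForecaster()
pred = model(torch.randn(2, 8, 6))            # forecast from 8 past steps
```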

Depth Prediction Without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos

Nov 15, 2018
Vincent Casser, Soeren Pirk, Reza Mahjourian, Anelia Angelova

Learning to predict scene depth from RGB inputs is a challenging task both for indoor and outdoor robot navigation. In this work we address unsupervised learning of scene depth and robot ego-motion where supervision is provided by monocular videos, as cameras are the cheapest, least restrictive and most ubiquitous sensor for robotics. Previous work in unsupervised image-to-depth learning has established strong baselines in the domain. We propose a novel approach which produces higher quality results, is able to model moving objects and is shown to transfer across data domains, e.g., from outdoor to indoor scenes. The main idea is to introduce geometric structure in the learning process, by modeling the scene and the individual objects; camera ego-motion and object motions are learned from monocular videos as input. Furthermore, an online refinement method is introduced to adapt learning on the fly to unknown domains. The proposed approach outperforms all state-of-the-art approaches, including those that handle motion, e.g., through learned flow. Our results are comparable in quality to those that use stereo as supervision and significantly improve depth prediction on scenes and datasets which contain a lot of object motion. The approach is of practical relevance, as it allows transfer across environments, by transferring models trained on data collected for robot navigation in urban scenes to indoor navigation settings. The code associated with this paper can be found at https://sites.google.com/view/struct2depth.
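
The online refinement idea can be sketched as a few unsupervised gradient steps at inference time on a short window of recent frames. The sketch below assumes a photometric_loss callable and illustrative hyper-parameters; it is not the paper's exact procedure.

```python
import torch

def online_refine(model, recent_frames, photometric_loss, num_steps=20, lr=1e-4):
    """Adapt a pretrained depth/ego-motion model to a new domain at inference
    time by taking a few gradient steps on the same unsupervised photometric
    objective over a short window of recent frames."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(num_steps):
        optimizer.zero_grad()
        # Reconstruction error between adjacent frames, computed by the caller.
        loss = photometric_loss(model, recent_frames)
        loss.backward()
        optimizer.step()
    model.eval()
    return model
```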

* Thirty-Third AAAI Conference on Artificial Intelligence (AAAI'19) 

Unsupervised Learning of Depth and Ego-Motion from Monocular Video Using 3D Geometric Constraints

Jun 09, 2018
Reza Mahjourian, Martin Wicke, Anelia Angelova

We present a novel approach for unsupervised learning of depth and ego-motion from monocular video. Unsupervised learning removes the need for separate supervisory signals (depth or ego-motion ground truth, or multi-view video). Prior work in unsupervised depth learning uses pixel-wise or gradient-based losses, which only consider pixels in small local neighborhoods. Our main contribution is to explicitly consider the inferred 3D geometry of the scene, enforcing consistency of the estimated 3D point clouds and ego-motion across consecutive frames. This is a challenging task and is solved by a novel (approximate) backpropagation algorithm for aligning 3D structures. We combine this novel 3D-based loss with 2D losses based on photometric quality of frame reconstructions using estimated depth and ego-motion from adjacent frames. We also incorporate validity masks to avoid penalizing areas in which no useful information exists. We test our algorithm on the KITTI dataset and on a video dataset captured on an uncalibrated mobile phone camera. Our proposed approach consistently improves depth estimates on both datasets, and outperforms the state-of-the-art for both depth and ego-motion. Because we only require a simple video, learning depth and ego-motion on large and varied datasets becomes possible. We demonstrate this by training on the low-quality uncalibrated video dataset and evaluating on KITTI, ranking among the top-performing prior methods, which are trained on KITTI itself.
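
To make the combination of 2D and 3D terms concrete, the sketch below adds a symmetric nearest-neighbor (chamfer) alignment between the ego-motion-compensated point cloud of one frame and the point cloud of the next to a photometric term. The paper's 3D loss uses an ICP-style alignment with approximate backpropagation rather than this plain chamfer stand-in, and the weight and inputs are illustrative.

```python
import torch

def chamfer_distance(points_a, points_b):
    """Symmetric nearest-neighbor distance between point clouds (N, 3) and
    (M, 3); a simple stand-in for the paper's ICP-style 3D alignment term."""
    d = torch.cdist(points_a, points_b)                 # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def depth_egomotion_loss(cloud_t, cloud_t1, ego_rotation, ego_translation,
                         photometric_term, w3d=0.1):
    """Combine a 2D photometric reconstruction term with a 3D consistency term
    that aligns the frame-t point cloud, moved by the estimated ego-motion,
    with the frame-(t+1) point cloud."""
    moved = cloud_t @ ego_rotation.T + ego_translation  # apply estimated ego-motion
    return photometric_term + w3d * chamfer_distance(moved, cloud_t1)
```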

* IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018); camera-ready version 