
David Wong


TF-GNN: Graph Neural Networks in TensorFlow

Jul 07, 2022
Oleksandr Ferludin, Arno Eigenwillig, Martin Blais, Dustin Zelle, Jan Pfeifer, Alvaro Sanchez-Gonzalez, Sibon Li, Sami Abu-El-Haija, Peter Battaglia, Neslihan Bulut, Jonathan Halcrow, Filipe Miguel Gonçalves de Almeida, Silvio Lattanzi, André Linhares, Brandon Mayer, Vahab Mirrokni, John Palowitch, Mihir Paradkar, Jennifer She, Anton Tsitsulin, Kevin Villela, Lisa Wang, David Wong, Bryan Perozzi

Figures 1–4 for TF-GNN: Graph Neural Networks in TensorFlow

TensorFlow GNN (TF-GNN) is a scalable library for Graph Neural Networks in TensorFlow. It is designed from the bottom up to support the kinds of rich heterogeneous graph data that occur in today's information ecosystems. Many production models at Google use TF-GNN, and it was recently released as an open-source project. In this paper, we describe the TF-GNN data model, its Keras modeling API, and relevant capabilities such as graph sampling, distributed training, and accelerator support.
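TF-GNN's data model represents a heterogeneous graph as named node sets and edge sets carrying per-node feature tensors. The following is a minimal numpy sketch of the message-passing idea such a data model supports; it does not use the library's actual API, and the node-set names, features, and mean aggregation are illustrative choices:

```python
import numpy as np

# Toy heterogeneous graph: "author" and "paper" node sets, with
# "writes" edges from authors to papers (all names are illustrative).
author_feats = np.array([[1.0, 0.0],
                         [0.0, 1.0]])   # 2 authors, 2-dim features
paper_feats = np.array([[0.5, 0.5]])    # 1 paper
writes_src = np.array([0, 1])           # source author index per edge
writes_dst = np.array([0, 0])           # destination paper index per edge

def message_pass(src_feats, src_idx, dst_idx, num_dst):
    """Mean-aggregate source-node features onto destination nodes."""
    out = np.zeros((num_dst, src_feats.shape[1]))
    counts = np.zeros(num_dst)
    np.add.at(out, dst_idx, src_feats[src_idx])  # sum messages per dst
    np.add.at(counts, dst_idx, 1.0)              # count in-edges per dst
    return out / np.maximum(counts, 1.0)[:, None]

# One round of author -> paper message passing, then a residual update.
pooled = message_pass(author_feats, writes_src, writes_dst, num_dst=1)
updated_papers = paper_feats + pooled
```

In the real library, an edge set's adjacency and features live alongside the node sets in a single graph tensor, so a model can run several such rounds over different edge sets of the same graph.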


ETA Prediction with Graph Neural Networks in Google Maps

Aug 25, 2021
Austin Derrow-Pinion, Jennifer She, David Wong, Oliver Lange, Todd Hester, Luis Perez, Marc Nunkesser, Seongjae Lee, Xueying Guo, Brett Wiltshire, Peter W. Battaglia, Vishal Gupta, Ang Li, Zhongwen Xu, Alvaro Sanchez-Gonzalez, Yujia Li, Petar Veličković

Figures 1–4 for ETA Prediction with Graph Neural Networks in Google Maps

Travel-time prediction constitutes a task of high importance in transportation networks, with web mapping services like Google Maps regularly serving vast quantities of travel time queries from users and enterprises alike. Further, such a task requires accounting for complex spatiotemporal interactions (modelling both the topological properties of the road network and anticipating events -- such as rush hours -- that may occur in the future). Hence, it is an ideal target for graph representation learning at scale. Here we present a graph neural network estimator for estimated time of arrival (ETA) which we have deployed in production at Google Maps. While our main architecture consists of standard GNN building blocks, we further detail the usage of training schedule methods such as MetaGradients in order to make our model robust and production-ready. We also provide prescriptive studies: ablations of various architectural decisions and training regimes, and qualitative analyses of real-world situations where our model provides a competitive edge. Our GNN proved powerful when deployed, significantly reducing negative ETA outcomes in several regions compared to the previous production baseline (40+% in cities like Sydney).
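The core intuition -- that delays on nearby road segments should influence a segment's predicted travel time -- can be sketched as one message-passing step over a toy road graph. This is a simplified illustration of the idea, not the paper's architecture; the segment times, congestion factors, and 50/50 mixing weight are made up:

```python
import numpy as np

# Toy road graph: 4 segments in a chain; an edge (i, j) means traffic
# flows from segment i into segment j (values are illustrative).
free_flow_time = np.array([30.0, 45.0, 20.0, 60.0])  # seconds per segment
congestion = np.array([0.0, 0.5, 0.1, 0.0])          # observed slowdown factor
edges = [(0, 1), (1, 2), (2, 3)]

def propagate_congestion(congestion, edges):
    """One message-passing step: each segment averages its own congestion
    with that of its upstream neighbours, mimicking how a GNN lets delays
    on nearby segments influence a segment's prediction."""
    updated = congestion.copy()
    for seg in range(len(congestion)):
        upstream = [congestion[i] for (i, j) in edges if j == seg]
        if upstream:
            updated[seg] = 0.5 * congestion[seg] + 0.5 * np.mean(upstream)
    return updated

smoothed = propagate_congestion(congestion, edges)
# Route ETA = sum of segment times inflated by the smoothed congestion.
eta = np.sum(free_flow_time * (1.0 + smoothed))
```

A learned GNN replaces the fixed averaging with trained update functions and stacks several such steps, so information can travel multiple hops along the route.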

* To appear at CIKM 2021 (Applied Research Track). 10 pages, 4 figures 

Characterization of Multiple 3D LiDARs for Localization and Mapping using Normal Distributions Transform

Apr 03, 2020
Alexander Carballo, Abraham Monrroy, David Wong, Patiphon Narksri, Jacob Lambert, Yuki Kitsukawa, Eijiro Takeuchi, Shinpei Kato, Kazuya Takeda

Figures 1–4 for Characterization of Multiple 3D LiDARs for Localization and Mapping using Normal Distributions Transform

In this work, we present a detailed comparison of ten different 3D LiDAR sensors, covering a range of manufacturers, models, and laser configurations, for the tasks of mapping and vehicle localization, using as common reference the Normal Distributions Transform (NDT) algorithm implemented in the self-driving open source platform Autoware. LiDAR data used in this study is a subset of our LiDAR Benchmarking and Reference (LIBRE) dataset, captured independently from each sensor, from a vehicle driven on public urban roads multiple times, at different times of the day. In this study, we analyze the performance and characteristics of each LiDAR for the tasks of (1) 3D mapping, including an assessment of map quality based on mean map entropy, and (2) 6-DOF localization using a ground truth reference map.
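Mean map entropy scores map crispness by fitting a Gaussian to each point's local neighbourhood and averaging the differential entropies h = ½·ln((2πe)³·det Σ); lower values indicate a sharper, better-aligned map. A hedged numpy sketch of the metric follows -- the search radius, minimum neighbour count, and regularisation are illustrative choices, not the paper's exact parameters:

```python
import numpy as np

def mean_map_entropy(points, radius=0.5):
    """Mean map entropy (MME) of an (N, 3) point cloud: average the
    differential entropy of a Gaussian fitted to each point's
    neighbourhood. Lower = crisper map."""
    entropies = []
    for p in points:
        nbrs = points[np.linalg.norm(points - p, axis=1) <= radius]
        if len(nbrs) < 4:  # too few points for a stable covariance
            continue
        cov = np.cov(nbrs.T) + 1e-9 * np.eye(3)  # regularise
        entropies.append(
            0.5 * np.log((2 * np.pi * np.e) ** 3 * np.linalg.det(cov)))
    return float(np.mean(entropies))

# A tightly clustered ("crisp") cloud should score lower than a blurry one.
rng = np.random.default_rng(0)
noisy = rng.normal(scale=0.05, size=(200, 3))  # blurry map
crisp = rng.normal(scale=0.01, size=(200, 3))  # well-aligned map
```

In a mapping evaluation like this one, the same metric would be computed on the NDT-built maps from each LiDAR to compare them on a common footing.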

* Submitted to IEEE International Conference on Intelligent Transportation Systems (ITSC) 2020. LIBRE dataset is available at https://sites.google.com/g.sp.m.is.nagoya-u.ac.jp/libre-dataset 

LIBRE: The Multiple 3D LiDAR Dataset

Mar 13, 2020
Alexander Carballo, Jacob Lambert, Abraham Monrroy, David Wong, Patiphon Narksri, Yuki Kitsukawa, Eijiro Takeuchi, Shinpei Kato, Kazuya Takeda

Figures 1–4 for LIBRE: The Multiple 3D LiDAR Dataset

In this work, we present LIBRE: LiDAR Benchmarking and Reference, a first-of-its-kind dataset featuring 12 different LiDAR sensors, covering a range of manufacturers, models, and laser configurations. Data captured independently from each sensor includes four different environments and configurations: static obstacles placed at known distances and measured from a fixed position within a controlled environment; static obstacles measured from a moving vehicle, captured in a weather chamber where LiDARs were exposed to different conditions (fog, rain, strong light); dynamic objects actively measured from a fixed position by multiple LiDARs mounted side-by-side simultaneously, creating indirect interference conditions; and dynamic traffic objects captured from a vehicle driven on public urban roads multiple times at different times of the day, including data from supporting sensors such as cameras, infrared imaging, and odometry devices. LIBRE will contribute to the research community by (1) providing a means for a fair comparison of currently available LiDARs, and (2) facilitating the improvement of existing self-driving vehicles and robotics-related software, in terms of development and tuning of LiDAR-based perception algorithms.

* LIBRE dataset available at https://sites.google.com/g.sp.m.is.nagoya-u.ac.jp/libre-dataset/. Reference video available at https://youtu.be/5S8Za9dQSwY 