Yanhua Li

STORM-GAN: Spatio-Temporal Meta-GAN for Cross-City Estimation of Human Mobility Responses to COVID-19

Jan 20, 2023
Han Bao, Xun Zhou, Yiqun Xie, Yanhua Li, Xiaowei Jia

Human mobility estimation is crucial during the COVID-19 pandemic because it provides critical guidance for policymakers making non-pharmaceutical interventions. While deep learning approaches outperform conventional estimation techniques on tasks with abundant training data, the continuously evolving pandemic poses a significant challenge due to data nonstationarity, limited observations, and complex social contexts. Prior works on mobility estimation either focus on a single city or lack the ability to model spatio-temporal dependencies across cities and time periods. To address these issues, we make the first attempt to tackle the cross-city human mobility estimation problem through a deep meta-generative framework. We propose a Spatio-Temporal Meta-Generative Adversarial Network (STORM-GAN) model that estimates dynamic human mobility responses under a set of social and policy conditions related to COVID-19. Facilitated by a novel spatio-temporal task-based graph (STTG) embedding, STORM-GAN is capable of learning shared knowledge from a spatio-temporal distribution of estimation tasks and quickly adapting to new cities and time periods with limited training samples. The STTG embedding component is designed to capture similarities among cities and thereby mitigate cross-task heterogeneity. Experimental results on real-world data show that the proposed approach greatly improves estimation performance and outperforms baselines.
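
To make the meta-learning recipe concrete, the sketch below shows a Reptile-style outer loop over sampled (city, time period) tasks with conditional-GAN inner updates. This is a minimal sketch only, assuming stand-in networks and data: the paper's STTG embedding, architectures, and update rules are not reproduced, and all names (sample_task, G, D) are hypothetical.

```python
import copy
import torch
import torch.nn as nn

# Hypothetical stand-ins for the conditional generator / discriminator.
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(9, 32), nn.ReLU(), nn.Linear(32, 1))
bce = nn.BCEWithLogitsLoss()

def sample_task():
    """Stand-in for sampling one (city, time period) estimation task."""
    cond = torch.randn(64, 8)       # social/policy conditions
    real = torch.randn(64, 1)       # observed mobility response
    noise = torch.randn(64, 8)
    return cond, real, noise

for outer_step in range(100):
    # Adapt copies of the meta-parameters on one sampled task.
    G_t, D_t = copy.deepcopy(G), copy.deepcopy(D)
    opt_g = torch.optim.Adam(G_t.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D_t.parameters(), lr=1e-3)
    cond, real, noise = sample_task()
    for _ in range(5):
        fake = G_t(torch.cat([noise, cond], 1))
        d_loss = (bce(D_t(torch.cat([real, cond], 1)), torch.ones(64, 1))
                  + bce(D_t(torch.cat([fake.detach(), cond], 1)), torch.zeros(64, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        g_loss = bce(D_t(torch.cat([G_t(torch.cat([noise, cond], 1)), cond], 1)),
                     torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    # Reptile-style outer step: move meta-parameters toward the adapted
    # ones, accumulating knowledge shared across cities and periods.
    for meta, task in ((G, G_t), (D, D_t)):
        for p, q in zip(meta.parameters(), task.parameters()):
            p.data += 0.1 * (q.data - p.data)
```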

* Accepted as a full paper at the 22nd IEEE International Conference on Data Mining (ICDM 2022) 

Symphony in the Latent Space: Provably Integrating High-dimensional Techniques with Non-linear Machine Learning Models

Dec 01, 2022
Qiong Wu, Jian Li, Zhenming Liu, Yanhua Li, Mihai Cucuringu

This paper revisits building machine learning algorithms that involve interactions between entities, such as those between financial assets in an actively managed portfolio, or interactions between users in a social network. Our goal is to forecast the future evolution of ensembles of multivariate time series in such applications (e.g., the future return of a financial asset or the future popularity of a Twitter account). Designing ML algorithms for such systems requires addressing the challenges of high-dimensional interactions and non-linearity. Existing approaches usually take an ad-hoc approach to integrating high-dimensional techniques into non-linear models, and recent studies have shown that these approaches have questionable efficacy in time-evolving interacting systems. To this end, we propose a novel framework, which we dub the additive influence model. Under our modeling assumption, we show that it is possible to decouple the learning of high-dimensional interactions from the learning of non-linear feature interactions. To learn the high-dimensional interactions, we leverage kernel-based techniques, with provable guarantees, to embed the entities in a low-dimensional latent space. To learn the non-linear feature-response interactions, we generalize prominent machine learning techniques, including designing a new statistically sound non-parametric method and an ensemble learning algorithm optimized for vector regressions. Extensive experiments on two common applications demonstrate that our new algorithms deliver significantly stronger forecasting power compared to standard and recently proposed methods.
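
The decoupling idea can be illustrated in a few lines: first embed the entities in a low-dimensional latent space, then fit a non-linear learner on the embedded features. This is only a toy sketch of the two-stage structure, substituting truncated SVD for the paper's kernel-based embedding and off-the-shelf gradient boosting for its non-parametric and ensemble methods; all data here is synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
# Toy panel: T time steps of returns for N interacting entities.
T, N, k = 400, 50, 4
X = rng.standard_normal((T, N))

# Stage 1 (high-dimensional interactions): embed entities in a
# low-dimensional latent space via truncated SVD of the panel --
# a simple stand-in for the paper's kernel-based embedding.
U, s, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
latent = Vt[:k].T                      # (N, k) entity embeddings

# Stage 2 (non-linear feature-response map): regress an entity's
# next-step value on the current panel projected into the latent
# space, using a non-linear learner.
feats = X[:-1] @ latent                # (T-1, k) latent state
model = GradientBoostingRegressor().fit(feats, X[1:, 0])  # entity 0
print(model.predict(feats[-5:]))       # one-step-ahead forecasts
```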

* Association for the Advancement of Artificial Intelligence (AAAI) 2023  
* 24 pages 

EgoSpeed-Net: Forecasting Speed-Control in Driver Behavior from Egocentric Video Data

Sep 27, 2022
Yichen Ding, Ziming Zhang, Yanhua Li, Xun Zhou

Speed-control forecasting, a challenging problem in driver behavior analysis, aims to predict a driver's future actions in controlling vehicle speed, such as braking or acceleration. In this paper, we address this challenge using egocentric video data alone, in contrast to the majority of works in the literature that use third-person-view data, extra vehicle sensor data such as GPS, or both. To this end, we propose EgoSpeed-Net, a novel model built on graph convolutional networks (GCNs). We are motivated by the fact that the position changes of objects over time provide very useful clues for forecasting future speed changes. We first model the spatial relations among the objects of each class, frame by frame, using fully-connected graphs, on top of which GCNs are applied for feature extraction. We then use a long short-term memory network to fuse such features per class over time into a vector, concatenate the vectors, and forecast a speed-control action using a multilayer perceptron classifier. We conduct extensive experiments on the Honda Research Institute Driving Dataset and demonstrate the superior performance of EgoSpeed-Net.
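
The described pipeline (per-class fully-connected graphs → GCN → per-class LSTM → concatenation → MLP) maps naturally onto a small PyTorch module. The sketch below is a minimal, hypothetical rendering, not the published architecture: for a fully-connected graph, one GCN layer with a normalized adjacency reduces to a linear map followed by a mean over objects, which is what the code exploits.

```python
import torch
import torch.nn as nn

class EgoSpeedSketch(nn.Module):
    """Toy version of the described pipeline: per-class GCN over a
    fully-connected object graph -> per-class LSTM -> concat -> MLP."""
    def __init__(self, n_classes=3, obj_feat=4, hid=32, n_actions=3):
        super().__init__()
        self.gcn = nn.Linear(obj_feat, hid)          # one shared GCN layer
        self.lstms = nn.ModuleList([nn.LSTM(hid, hid, batch_first=True)
                                    for _ in range(n_classes)])
        self.head = nn.Sequential(nn.Linear(n_classes * hid, hid),
                                  nn.ReLU(), nn.Linear(hid, n_actions))

    def forward(self, x):
        # x: (batch, classes, frames, objects, obj_feat), e.g. object
        # positions per frame; with a fully-connected graph the GCN's
        # normalized adjacency is simply a mean over objects.
        h = torch.relu(self.gcn(x)).mean(dim=3)      # aggregate objects
        outs = []
        for c, lstm in enumerate(self.lstms):
            _, (h_c, _) = lstm(h[:, c])              # fuse over frames
            outs.append(h_c[-1])
        return self.head(torch.cat(outs, dim=1))     # action logits

logits = EgoSpeedSketch()(torch.randn(2, 3, 10, 5, 4))  # (2, n_actions)
```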

* In Proceedings of the 30th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (2022), accepted as a full paper 

HintNet: Hierarchical Knowledge Transfer Networks for Traffic Accident Forecasting on Heterogeneous Spatio-Temporal Data

Mar 07, 2022
Bang An, Amin Vahedian, Xun Zhou, W. Nick Street, Yanhua Li

Traffic accident forecasting is a significant problem for transportation management and public safety. However, it is challenging due to the spatial heterogeneity of the environment and the sparsity of accidents in space and time. The occurrence of traffic accidents is affected by complex dependencies among spatial and temporal features. Recent traffic accident prediction methods have attempted to use deep learning models to improve accuracy. However, most of these methods either focus on small-scale, homogeneous areas such as populous cities or simply use sliding-window-based ensemble methods, which are inadequate to handle heterogeneity in large regions. To address these limitations, this paper proposes a novel Hierarchical Knowledge Transfer Network (HintNet) model to better capture irregular heterogeneity patterns. HintNet performs multi-level spatial partitioning to separate sub-regions with different risks and learns a deep network model for each level using spatio-temporal and graph convolutions. Through knowledge transfer across levels, HintNet achieves both higher accuracy and higher training efficiency. Extensive experiments on a real-world accident dataset from the state of Iowa demonstrate that HintNet outperforms state-of-the-art methods on spatially heterogeneous and large-scale areas.
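
A minimal sketch of the level-by-level transfer scheme, assuming synthetic data and a toy risk-based partition: each level's model is initialized from the previous level's weights rather than trained from scratch. The paper's actual spatial partitioning and its spatio-temporal/graph convolutions are not reproduced here; everything below is a hypothetical stand-in.

```python
import copy
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
feats = torch.tensor(rng.standard_normal((1000, 8)), dtype=torch.float32)
risk = rng.random(1000)                        # historical accident risk per cell
y = torch.tensor((risk > 0.6).astype("float32")).unsqueeze(1)

# Stand-in for multi-level spatial partitioning: risk tiers, high to low.
levels = [risk >= 0.7, (risk >= 0.3) & (risk < 0.7), risk < 0.3]

models, prev = [], None
for mask in levels:
    # Knowledge transfer: start each level from the previous level's weights.
    model = copy.deepcopy(prev) if prev is not None else nn.Sequential(
        nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    m = torch.from_numpy(mask)
    X, t = feats[m], y[m]
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.binary_cross_entropy_with_logits(model(X), t)
        loss.backward(); opt.step()
    models.append(model)
    prev = model
```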

* 9 pages, 2022 SIAM International Conference on Data Mining 

$f$-GAIL: Learning $f$-Divergence for Generative Adversarial Imitation Learning

Oct 02, 2020
Xin Zhang, Yanhua Li, Ziming Zhang, Zhi-Li Zhang

Imitation learning (IL) aims to learn a policy from expert demonstrations that minimizes the discrepancy between the learner and expert behaviors. Various imitation learning algorithms have been proposed with different pre-determined divergences to quantify the discrepancy. This naturally gives rise to the following question: given a set of expert demonstrations, which divergence can recover the expert policy more accurately and with higher data efficiency? In this work, we propose $f$-GAIL, a new generative adversarial imitation learning (GAIL) model that automatically learns a discrepancy measure from the $f$-divergence family as well as a policy capable of producing expert-like behaviors. Compared with IL baselines using various predefined divergence measures, $f$-GAIL learns better policies with higher data efficiency on six physics-based control tasks.
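
For context, discriminators in this family of methods maximize the variational lower bound $D_f(P \| Q) \ge \mathbb{E}_P[T(x)] - \mathbb{E}_Q[f^*(T(x))]$. The sketch below fixes $f$ to the KL divergence (conjugate $f^*(t) = e^{t-1}$) purely for illustration; the point of $f$-GAIL is to learn $f$ from the family rather than fix it, so this is a simplified stand-in on synthetic expert/learner samples.

```python
import torch
import torch.nn as nn

# Variational lower bound behind f-divergence discriminators:
#   D_f(P || Q) >= E_P[T(x)] - E_Q[f*(T(x))]
# f is FIXED to KL here (f*(t) = exp(t - 1)); f-GAIL instead
# learns the divergence itself.
T_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(T_net.parameters(), lr=1e-3)

expert = torch.randn(256, 2) + 1.0   # stand-in expert state-action pairs
learner = torch.randn(256, 2)        # stand-in learner state-action pairs
for _ in range(200):
    bound = T_net(expert).mean() - torch.exp(T_net(learner) - 1).mean()
    loss = -bound                    # maximize the lower bound
    opt.zero_grad(); loss.backward(); opt.step()
# The tightened bound estimates KL(expert || learner); the policy is
# then trained to shrink it, closing the adversarial loop.
```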

BATS: A Spectral Biclustering Approach to Single Document Topic Modeling and Segmentation

Aug 05, 2020
Sirui Wang, Yuwei Tu, Qiong Wu, Adam Hare, Zhenming Liu, Christopher G. Brinton, Yanhua Li

Existing topic modeling and text segmentation methodologies generally require large datasets for training, limiting their capabilities when only small collections of text are available. In this work, we reexamine the interrelated problems of "topic identification" and "text segmentation" for sparse document learning, when there is a single new text of interest. In developing a methodology to handle single documents, we face two major challenges. First is sparse information: with access to only one document, we cannot train traditional topic models or deep learning algorithms. Second is significant noise: a considerable portion of words in any single document will produce only noise and not help discern topics or segments. To tackle these issues, we design an unsupervised, computationally efficient methodology called BATS: Biclustering Approach to Topic modeling and Segmentation. BATS leverages three key ideas to simultaneously identify topics and segment text: (i) a new mechanism that uses word order information to reduce sample complexity, (ii) a statistically sound graph-based biclustering technique that identifies latent structures of words and sentences, and (iii) a collection of effective heuristics that remove noise words and award important words to further improve performance. Experiments on four datasets show that our approach outperforms several state-of-the-art baselines on topic coherence, topic diversity, segmentation, and runtime metrics.
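
As a point of reference for the biclustering idea in (ii), plain spectral co-clustering of a sentence-word count matrix can be run in a few lines with scikit-learn. This omits BATS's word-order mechanism and noise-word heuristics and only illustrates the joint row/column clustering the method builds on; the toy sentences are made up.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import SpectralCoclustering

sentences = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "stocks fell as markets closed",
    "investors sold shares on market news",
]
# Sentence-word count matrix: rows = sentences, columns = words.
X = CountVectorizer().fit_transform(sentences)

# Bicluster rows (sentences -> segments) and columns (words -> topics)
# jointly via the spectral co-clustering algorithm.
model = SpectralCoclustering(n_clusters=2, random_state=0).fit(X)
print(model.row_labels_)     # segment assignment per sentence
print(model.column_labels_)  # topic assignment per word
```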

Reward Advancement: Transforming Policy under Maximum Causal Entropy Principle

Jul 11, 2019
Guojun Wu, Yanhua Li, Zhenming Liu, Jie Bao, Yu Zheng, Jieping Ye, Jun Luo

Many real-world human behaviors can be characterized as sequential decision-making processes, such as urban travelers' choices of transport modes and routes (Wu et al. 2017). Unlike choices controlled by machines, which in general follow perfect rationality and adopt the policy with the highest reward, studies have revealed that human agents make sub-optimal decisions under bounded rationality (Tao, Rohde, and Corcoran 2014). Such behaviors can be modeled using the maximum causal entropy (MCE) principle (Ziebart 2010). In this paper, we define and investigate a general reward transformation problem (namely, reward advancement): recovering the range of additional reward functions that transform the agent's policy from its original policy to a predefined target policy under the MCE principle. We show that, given an MDP and a target policy, there are infinitely many additional reward functions that can achieve the desired policy transformation. Moreover, we propose an algorithm to extract the additional rewards with minimum "cost" that implement the policy transformation.
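
In the degenerate one-step (bandit) case, the MCE policy is a softmax over rewards, which makes both the transformation and its non-uniqueness easy to verify numerically. The sketch below covers only this special case, not the paper's general MDP construction: adding the log-ratio of target to original policy recovers the target, and any constant shift of that additional reward works equally well.

```python
import numpy as np

def mce_policy(r):
    """MCE policy in the one-step (bandit) case: softmax over rewards."""
    e = np.exp(r - r.max())
    return e / e.sum()

r = np.array([1.0, 0.0, -1.0])           # original rewards
pi = mce_policy(r)                        # original MCE policy
pi_target = np.array([0.1, 0.2, 0.7])     # desired target policy

# One valid additional reward: the log-ratio of target to original policy.
delta = np.log(pi_target) - np.log(pi)
print(mce_policy(r + delta))              # -> [0.1, 0.2, 0.7]

# Non-uniqueness: any constant shift of delta leaves the softmax, and
# hence the induced policy, unchanged -- one way to see why infinitely
# many additional reward functions achieve the same transformation.
print(mce_policy(r + delta + 3.14))       # -> [0.1, 0.2, 0.7]
```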

Adaptive Reduced Rank Regression

May 28, 2019
Qiong Wu, Felix Ming Fai Wong, Zhenming Liu, Yanhua Li, Varun Kanade

Low-rank regression has proven useful in a wide range of forecasting problems. However, in settings with a low signal-to-noise ratio, it is known to suffer from severe overfitting. This paper studies the reduced rank regression problem and presents algorithms with provable generalization guarantees. We use adaptive hard rank-thresholding in two different parts of the data analysis pipeline. First, we consider a low-rank projection of the data to eliminate the components that are most likely to be noisy. Second, we apply a standard multivariate linear regression estimator to the data obtained in the first step, and subsequently consider a low-rank projection of the resulting regression matrix. Both thresholding steps are performed in a data-driven manner and are required to prevent severe overfitting, as our lower bounds show. Experimental results show that our approach either outperforms or is competitive with existing baselines.
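
The two thresholding steps can be mimicked in a short numpy sketch: project the design onto its top singular directions, run ordinary multivariate least squares, then hard-threshold the rank of the resulting coefficient matrix. Here the rank is fixed by hand for simplicity, whereas the paper selects both thresholds in a data-driven way; the data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m, k = 200, 30, 10, 3              # samples, features, responses, rank
B_true = rng.standard_normal((p, k)) @ rng.standard_normal((k, m))
X = rng.standard_normal((n, p))
Y = X @ B_true + 5.0 * rng.standard_normal((n, m))   # low signal-to-noise

def hard_threshold_svd(A, r):
    """Keep only the top-r singular directions of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# Step 1: low-rank projection of the design to drop noisy components.
X_low = hard_threshold_svd(X, k)

# Step 2: ordinary multivariate least squares on the projected data,
# followed by a second hard rank-thresholding of the coefficient matrix.
B_ols, *_ = np.linalg.lstsq(X_low, Y, rcond=None)
B_hat = hard_threshold_svd(B_ols, k)

print(np.linalg.norm(B_hat - B_true) / np.linalg.norm(B_true))
```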

* 27 pages 