Yanxia Zhang

GAME-UP: Game-Aware Mode Enumeration and Understanding for Trajectory Prediction

May 28, 2023
Justin Lidard, Oswin So, Yanxia Zhang, Jonathan DeCastro, Xiongyi Cui, Xin Huang, Yen-Ling Kuo, John Leonard, Avinash Balachandran, Naomi Leonard, Guy Rosman

Interactions between road agents present a significant challenge in trajectory prediction, especially in cases involving multiple agents. Because existing diversity-aware predictors do not account for the interactive nature of multi-agent predictions, they may miss these important interaction outcomes. In this paper, we propose GAME-UP, a framework for trajectory prediction that leverages game-theoretic inverse reinforcement learning to improve coverage of multi-modal predictions. We use a training-time game-theoretic numerical analysis as an auxiliary loss, resulting in improved coverage and accuracy without presuming a taxonomy of actions for the agents. We demonstrate our approach on the interactive subset of the Waymo Open Motion Dataset, including three subsets involving scenarios with high interaction complexity. Experimental results show that our predictor produces accurate predictions while covering twice as many possible interactions as a baseline model.

* 10 pages, 6 figures 
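
The abstract does not spell out the auxiliary loss, so the following is a minimal PyTorch sketch of one plausible form, not the paper's code: trajectories enumerated by a game-theoretic solver serve as coverage targets, and each is pulled toward its nearest predicted mode, encouraging the predictor to cover every interaction outcome without a fixed action taxonomy. The solver interface, `lam`, and the winner-takes-all regression term are all our assumptions.

```python
import torch

def coverage_loss(pred_modes: torch.Tensor, game_modes: torch.Tensor) -> torch.Tensor:
    """pred_modes: (K, T, 2) predicted trajectory modes; game_modes: (M, T, 2)
    interaction outcomes enumerated by a game-theoretic solver. Each outcome is
    matched to its nearest predicted mode, so minimizing the loss pushes the
    predictor to cover every enumerated interaction."""
    diff = game_modes[:, None] - pred_modes[None, :]   # (M, K, T, 2) pairwise offsets
    dist = diff.norm(dim=-1).mean(dim=-1)              # (M, K) mean displacement
    return dist.min(dim=1).values.mean()               # nearest predicted mode per outcome

def total_loss(pred_modes, gt_traj, game_modes, lam=0.1):
    # winner-takes-all regression on the observed future, plus the coverage term
    ade = (pred_modes - gt_traj[None]).norm(dim=-1).mean(dim=-1)  # (K,) per-mode error
    return ade.min() + lam * coverage_loss(pred_modes, game_modes)
```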

Accelerating Understanding of Scientific Experiments with End to End Symbolic Regression

Dec 07, 2021
Nikos Arechiga, Francine Chen, Yan-Ying Chen, Yanxia Zhang, Rumen Iliev, Heishiro Toyoda, Kent Lyons

We consider the problem of learning free-form symbolic expressions from raw data, such as that produced by an experiment in any scientific domain. Accurate and interpretable models of scientific phenomena are the cornerstone of scientific research. Simple yet interpretable models, such as linear or logistic regression and decision trees, often lack predictive accuracy. Alternatively, accurate black-box models such as deep neural networks provide high predictive accuracy but do not readily admit human understanding in a way that would enrich the scientific theory of the phenomenon. Many great breakthroughs in science revolve around the development of parsimonious equational models with high predictive accuracy, such as Newton's laws, universal gravitation, and Maxwell's equations. Previous work on automating the search for equational models from data combines domain-specific heuristics with computationally expensive techniques such as genetic programming and Monte Carlo search. We develop a deep neural network (MACSYMA) that addresses symbolic regression as an end-to-end supervised learning problem. MACSYMA can generate symbolic expressions that describe a dataset, reducing the computational complexity of the task to the feedforward computation of a neural network. We train our network on a synthetic dataset consisting of data tables of varying length and varying levels of noise, for which it must learn to produce the correct symbolic expression token by token. Finally, we validate our technique by running it on a public dataset from behavioral science.
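
The abstract frames symbolic regression as end-to-end supervised sequence generation: a network reads a data table and emits expression tokens one by one. Below is a minimal sketch of that framing; the row-wise MLP encoder, mean pooling, and GRU decoder are illustrative assumptions, since the abstract does not describe MACSYMA's actual architecture.

```python
import torch
import torch.nn as nn

class TableToExpression(nn.Module):
    """Encode a data table of (x, y) rows, then decode expression tokens."""
    def __init__(self, vocab_size: int, hidden: int = 128):
        super().__init__()
        self.row_encoder = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, table: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
        # table: (B, N, 2) rows of (x, y) samples; tokens: (B, L) teacher-forced targets
        ctx = self.row_encoder(table).mean(dim=1)   # permutation-invariant table summary
        h0 = ctx.unsqueeze(0).contiguous()          # (1, B, hidden) initial decoder state
        dec, _ = self.decoder(self.embed(tokens), h0)
        return self.out(dec)                        # (B, L, vocab) next-token logits

# Training reduces to cross-entropy between logits[:, :-1] and tokens[:, 1:],
# so producing an expression is a single feedforward pass plus greedy decoding.
```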

Using Sensory Time-cue to enable Unsupervised Multimodal Meta-learning

Sep 16, 2020
Qiong Liu, Yanxia Zhang

As data from IoT (Internet of Things) sensors become ubiquitous, state-of-the-art machine learning algorithms face many challenges in using sensor data directly. To overcome these challenges, methods must be designed to learn directly from sensors without manual annotations. This paper introduces Sensory Time-cue for Unsupervised Meta-learning (STUM). Unlike traditional learning approaches that depend heavily either on labels or on time-independent feature extraction assumptions, such as Gaussian-distributed features, the STUM system uses the time relations of inputs to guide feature-space formation within and across modalities. Because STUM learns from a variety of small tasks, it may be placed in the camp of meta-learning; unlike existing meta-learning approaches, however, its learning tasks are composed within and across multiple modalities based on the time-cues that co-exist with the IoT streaming data. In an audiovisual learning example, because consecutive visual frames usually contain the same object, this approach provides a unique way to group features from the same object together. The same method can also group a visual object's features together with the features of the object's spoken name if the name is presented at about the same time as the object. This cross-modality feature organization may further help organize visual features that belong to similar objects but are acquired at different locations and times. Evaluations demonstrate promising results.
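
The abstract describes organizing features by temporal co-occurrence within and across modalities but gives no objective. One plausible reading is a time-cue contrastive loss, sketched below under our own assumptions (the positive window, temperature, and softmax form are not from the paper): embeddings of samples captured close together in time are pulled together, which groups consecutive frames of an object with each other and with a co-occurring spoken name.

```python
import torch
import torch.nn.functional as F

def time_cue_loss(feats: torch.Tensor, timestamps: torch.Tensor,
                  window: float = 1.0, temperature: float = 0.1) -> torch.Tensor:
    """feats: (N, D) embeddings of samples from any mix of modalities;
    timestamps: (N,) capture times. Samples within `window` seconds are treated
    as positives, pulling co-occurring frames and spoken names together."""
    z = F.normalize(feats, dim=1)
    sim = (z @ z.t()) / temperature                           # (N, N) cosine similarities
    close = (timestamps[:, None] - timestamps[None, :]).abs() < window
    pos = close.float() - torch.eye(len(z), device=z.device)  # drop self-pairs
    log_prob = sim.log_softmax(dim=1)                         # contrast against all samples
    per_sample = -(log_prob * pos).sum(dim=1) / pos.sum(dim=1).clamp(min=1)
    return per_sample.mean()
```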
