Tracy Cai

Trajectory Prediction using Generative Adversarial Network in Multi-Class Scenarios

Oct 18, 2021
Shilun Li, Tracy Cai, Jiayi Li

Predicting traffic agents' trajectories is an important task for autonomous driving. Most previous work on trajectory prediction considers only a single class of road agent. We use a sequence-to-sequence model to predict future paths from observed paths, and we incorporate class information into the model by concatenating extracted label representations with the traditional location inputs. We experiment with both LSTM and transformer encoders, and we use a generative adversarial network, as introduced in Social GAN, to learn the multi-modal behavior of traffic agents. We train our model on the Stanford Drone dataset, which includes six classes of road agents, and evaluate the impact of different model components on prediction performance in multi-class scenes.
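
The class-conditioning idea in the abstract is compact enough to sketch. Below is a minimal PyTorch illustration (not the paper's released code) of an LSTM encoder that concatenates a learned class-label embedding with embedded (x, y) locations at every time step; all layer names and dimensions are illustrative assumptions.

```python
# Minimal sketch of a class-conditioned trajectory encoder, assuming PyTorch.
# Layer sizes and names are hypothetical, not taken from the paper.
import torch
import torch.nn as nn

class ClassConditionedEncoder(nn.Module):
    def __init__(self, num_classes=6, class_dim=16, loc_dim=32, hidden_dim=64):
        super().__init__()
        self.class_embed = nn.Embedding(num_classes, class_dim)  # label representation
        self.loc_embed = nn.Linear(2, loc_dim)                   # (x, y) location input
        self.lstm = nn.LSTM(loc_dim + class_dim, hidden_dim, batch_first=True)

    def forward(self, obs_traj, agent_class):
        # obs_traj: (batch, obs_len, 2) observed (x, y) positions
        # agent_class: (batch,) integer class label per agent
        T = obs_traj.size(1)
        cls = self.class_embed(agent_class).unsqueeze(1).expand(-1, T, -1)
        loc = self.loc_embed(obs_traj)
        # Concatenate the class representation with location features at each step
        h, state = self.lstm(torch.cat([loc, cls], dim=-1))
        return h, state
```

A transformer encoder could be swapped in for the LSTM with the same concatenated per-step inputs.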

Did the Model Change? Efficiently Assessing Machine Learning API Shifts

Jul 29, 2021
Lingjiao Chen, Tracy Cai, Matei Zaharia, James Zou

Machine learning (ML) prediction APIs are increasingly widely used. An ML API can change over time due to model updates or retraining, which presents a key challenge for users: it is often unclear if and how the underlying model has changed. Model shifts can affect downstream application performance and also create oversight issues (e.g., when consistency is desired). In this paper, we initiate a systematic investigation of ML API shifts. We first quantify the performance shifts from 2020 to 2021 of popular ML APIs from Google, Microsoft, Amazon, and others on a variety of datasets. We identify significant model shifts in 12 of the 36 cases investigated. Interestingly, we find several datasets where the API's predictions became significantly worse over time. This motivates us to formulate the API shift assessment problem at a more fine-grained level: estimating how the API model's confusion matrix changes over time when the data distribution is held constant. Monitoring confusion matrix shifts with standard random sampling can require a large number of samples, which is expensive because each API call costs a fee. We propose a principled adaptive sampling algorithm, MASA, to efficiently estimate confusion matrix shifts. MASA can accurately estimate the confusion matrix shifts in commercial ML APIs using up to 90% fewer samples than random sampling. This work establishes ML API shifts as an important problem to study and provides a cost-effective approach to monitor such shifts.
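
To make the sampling problem concrete, here is a toy sketch of budget-constrained shift estimation. It is not the paper's MASA algorithm; it only illustrates the general idea that adaptive methods refine: spending the paid API-call budget on the classes whose shift estimates are still most uncertain, rather than sampling uniformly at random. The function names and the per-class allocation rule are assumptions for illustration.

```python
# Toy illustration of adaptive sampling for confusion-matrix shift estimation.
# NOT the paper's MASA algorithm; a crude uncertainty-driven allocation sketch.
import numpy as np

def estimate_shift(old_preds, call_api, labels, budget, seed=0):
    """Estimate, per true class, how often the current API disagrees with
    cached old predictions, under a fixed call budget.
    old_preds: cached predictions from the earlier API version.
    call_api: function i -> fresh prediction for item i (one paid call).
    labels: ground-truth labels for the evaluation pool."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    pools = {c: list(np.flatnonzero(labels == c)) for c in classes}
    stats = {c: [0, 0] for c in classes}  # per class: [calls made, disagreements]

    def uncertainty(c):
        n, d = stats[c]
        if n == 0:
            return np.inf                 # unexplored classes get priority
        p = (d + 1) / (n + 2)             # Laplace-smoothed disagreement rate
        return np.sqrt(p * (1 - p) / n)   # crude standard-error proxy

    for _ in range(budget):
        live = [c for c in classes if pools[c]]
        if not live:
            break
        c = max(live, key=uncertainty)    # sample where the estimate is weakest
        i = pools[c].pop(rng.integers(len(pools[c])))
        stats[c][0] += 1
        stats[c][1] += int(call_api(i) != old_preds[i])

    return {c: (d / n if n else None) for c, (n, d) in stats.items()}
```

The per-class disagreement rates correspond to rows of the confusion-matrix change being monitored; random sampling would instead spread the same budget uniformly, wasting calls on classes whose estimates have already converged.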
