Haleh Damirchi

Context-aware Pedestrian Trajectory Prediction with Multimodal Transformer

Jul 07, 2023
Haleh Damirchi, Michael Greenspan, Ali Etemad

We propose a novel solution for predicting future trajectories of pedestrians. Our method uses a multimodal encoder-decoder transformer architecture, which takes as input both pedestrian locations and ego-vehicle speeds. Notably, our decoder predicts the entire future trajectory in a single pass rather than through one-step-ahead prediction, which makes the method well-suited to embedded edge deployment. We perform detailed experiments and evaluate our method on two popular datasets, PIE and JAAD. Quantitative results demonstrate the superiority of our proposed model over the current state-of-the-art, consistently achieving the lowest error across the three time horizons of 0.5, 1.0, and 1.5 seconds. Moreover, the proposed method is significantly faster than the state-of-the-art on both datasets. Lastly, ablation experiments demonstrate the impact of the key multimodal configuration of our method.
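A minimal PyTorch sketch of the kind of architecture the abstract describes: pedestrian location and ego-vehicle speed histories are embedded and encoded together, and a set of learned queries lets the decoder emit the whole future trajectory in a single pass. All layer sizes, the additive fusion of the two modalities, and the module names are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of a multimodal encoder-decoder transformer for single-pass
# trajectory prediction. Hyperparameters and fusion scheme are guesses.
import torch
import torch.nn as nn

class MultimodalTrajectoryTransformer(nn.Module):
    def __init__(self, obs_len=15, pred_len=45, d_model=128, nhead=8, num_layers=4):
        super().__init__()
        self.loc_embed = nn.Linear(4, d_model)    # pedestrian bounding-box history
        self.speed_embed = nn.Linear(1, d_model)  # ego-vehicle speed history
        self.pos_enc = nn.Parameter(torch.randn(obs_len, d_model))
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        # Learned queries let the decoder emit every future step at once
        # (single pass), instead of feeding predictions back one step at a time.
        self.future_queries = nn.Parameter(torch.randn(pred_len, d_model))
        self.head = nn.Linear(d_model, 4)

    def forward(self, locations, speeds):
        # locations: (B, obs_len, 4), speeds: (B, obs_len, 1)
        tokens = self.loc_embed(locations) + self.speed_embed(speeds) + self.pos_enc
        memory = self.encoder(tokens)
        queries = self.future_queries.unsqueeze(0).expand(memory.size(0), -1, -1)
        return self.head(self.decoder(queries, memory))  # (B, pred_len, 4)

# Example: 0.5 s of observation predicting 1.5 s ahead at 30 fps.
model = MultimodalTrajectoryTransformer(obs_len=15, pred_len=45)
pred = model(torch.randn(2, 15, 4), torch.randn(2, 15, 1))
print(pred.shape)  # torch.Size([2, 45, 4])
```

Because no decoder output is fed back in, inference latency does not grow step by step with the prediction horizon, which is what makes single-pass decoding attractive for embedded deployment.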

Multiscale Crowd Counting and Localization By Multitask Point Supervision

Feb 21, 2022
Mohsen Zand, Haleh Damirchi, Andrew Farley, Mahdiyar Molahasani, Michael Greenspan, Ali Etemad

We propose a multitask approach for crowd counting and person localization in a unified framework. As the counting and localization tasks are well-correlated and can be jointly tackled, our model benefits from a multitask solution by learning multiscale representations of encoded crowd images and subsequently fusing them. In contrast to the relatively more popular density-based methods, our model uses point supervision to allow crowd locations to be accurately identified. We test our model on two popular crowd counting datasets, ShanghaiTech A and B, and demonstrate that our method achieves strong results on both counting and localization tasks, with MSE measures of 110.7 and 15.0 for crowd counting and AP measures of 0.71 and 0.75 for localization, on ShanghaiTech A and B respectively. Our detailed ablation experiments show the impact of our multiscale approach as well as the effectiveness of the fusion module embedded in our network. Our code is available at: https://github.com/RCVLab-AiimLab/crowd_counting.
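A minimal PyTorch sketch of a multitask, multiscale design in the spirit of the abstract: a shared encoder yields features at several scales, a fusion module combines them, and two heads produce a point-supervised localization map and a global count. The backbone, fusion operator, and head definitions below are placeholder assumptions; the authors' actual network is in the linked repository.

```python
# Sketch of multiscale feature extraction with fusion and two task heads
# (localization map + count regression). Architecture details are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiscaleCrowdNet(nn.Module):
    def __init__(self, channels=(64, 128, 256)):
        super().__init__()
        self.stages = nn.ModuleList()
        in_c = 3
        for c in channels:
            self.stages.append(nn.Sequential(
                nn.Conv2d(in_c, c, 3, stride=2, padding=1), nn.ReLU(inplace=True)))
            in_c = c
        # Fusion: upsample every scale to the finest resolution and concatenate.
        self.fuse = nn.Conv2d(sum(channels), 128, 1)
        self.loc_head = nn.Conv2d(128, 1, 1)   # per-pixel head-point confidence map
        self.count_head = nn.Linear(128, 1)    # global count regression

    def forward(self, x):
        feats, out = [], x
        for stage in self.stages:
            out = stage(out)
            feats.append(out)
        target_size = feats[0].shape[-2:]
        fused = self.fuse(torch.cat(
            [F.interpolate(f, size=target_size, mode='bilinear', align_corners=False)
             for f in feats], dim=1))
        loc_map = torch.sigmoid(self.loc_head(fused))
        count = self.count_head(fused.mean(dim=(2, 3)))
        return loc_map, count

model = MultiscaleCrowdNet()
loc_map, count = model(torch.randn(1, 3, 384, 384))
print(loc_map.shape, count.shape)  # torch.Size([1, 1, 192, 192]) torch.Size([1, 1])
```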

* 4 pages + references, 3 figures, 2 tables, Accepted by ICASSP 2022 Conference 