Xiyang Wang

Localization-Guided Track: A Deep Association Multi-Object Tracking Framework Based on Localization Confidence of Detections

Sep 18, 2023
Ting Meng, Chunyun Fu, Mingguang Huang, Xiyang Wang, Jiawei He, Tao Huang, Wankai Shi


You Only Need Two Detectors to Achieve Multi-Modal 3D Multi-Object Tracking

Apr 18, 2023
Xiyang Wang, Jiawei He, Chunyun Fu, Ting Meng, Mingguang Huang


3D Multi-Object Tracking Based on Uncertainty-Guided Data Association

Mar 03, 2023
Jiawei He, Chunyun Fu, Xiyang Wang


DeepFusionMOT: A 3D Multi-Object Tracking Framework Based on Camera-LiDAR Fusion with Deep Association

Feb 24, 2022
Xiyang Wang, Chunyun Fu, Zhankun Li, Ying Lai, Jiawei He


Hierarchical View Predictor: Unsupervised 3D Global Feature Learning through Hierarchical Prediction among Unordered Views

Aug 08, 2021
Zhizhong Han, Xiyang Wang, Yu-Shen Liu, Matthias Zwicker


BSTC: A Large-Scale Chinese-English Speech Translation Dataset

Apr 27, 2021
Ruiqing Zhang, Xiyang Wang, Chuanqiang Zhang, Zhongjun He, Hua Wu, Zhi Li, Haifeng Wang, Ying Chen, Qinfei Li


Multi-Angle Point Cloud-VAE: Unsupervised Feature Learning for 3D Point Clouds from Multiple Angles by Joint Self-Reconstruction and Half-to-Half Prediction

Jul 30, 2019
Zhizhong Han, Xiyang Wang, Yu-Shen Liu, Matthias Zwicker


3DViewGraph: Learning Global Features for 3D Shapes from A Graph of Unordered Views with Attention

May 17, 2019
Zhizhong Han, Xiyang Wang, Chi-Man Vong, Yu-Shen Liu, Matthias Zwicker, C. L. Philip Chen


Y^2Seq2Seq: Cross-Modal Representation Learning for 3D Shape and Text by Joint Reconstruction and Prediction of View and Word Sequences

Nov 07, 2018
Zhizhong Han, Mingyang Shang, Xiyang Wang, Yu-Shen Liu, Matthias Zwicker
