Sibo Zhang

Rate-Splitting Multiple Access: Finite Constellations, Receiver Design, and SIC-free Implementation

May 30, 2023
Sibo Zhang, Bruno Clerckx, David Vargas, Oliver Haffenden, Andrew Murphy

Rate-Splitting Multiple Access (RSMA) has emerged as a novel multiple access technique that enlarges the achievable rate region of Multiple-Input Multiple-Output (MIMO) broadcast channels with linear precoding. In this work, we jointly address three practical but fundamental questions: (1) How can the benefits of RSMA be exploited under finite constellations? (2) What are promising ways to implement RSMA receivers? (3) Can RSMA retain its superiority in the absence of successive interference cancellers (SIC)? To address these questions, we first propose low-complexity precoder designs that take finite constellations into account and show that the potential of RSMA is better realized with such designs than with designs that assume Gaussian signalling. We then consider several practical receiver designs that can be applied to RSMA. We observe that these receiver designs follow one of two principles: (1) SIC, which cancels upper-layer signals before decoding the lower layer, and (2) non-SIC, which treats upper-layer signals as noise when decoding the lower layer. In light of this, we propose to adapt the precoder design to the receiver category. Link-level simulations verify the effectiveness of the proposed precoder and receiver designs. More importantly, we show that it is possible to preserve the superiority of RSMA over Spatial Domain Multiple Access (SDMA), including SDMA with advanced receivers, even without SIC at the receivers. These results open the door to competitive, implementable RSMA strategies for 6G and beyond communications.

* Submitted to IEEE for publication 
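
The SIC vs. non-SIC distinction can be made concrete with a toy rate calculation. The sketch below is a minimal NumPy illustration of one-layer rate splitting for a two-user MISO channel: it uses simple matched-filter/zero-forcing precoders, a fixed power split, and Gaussian-signalling rate expressions, none of which are the finite-constellation designs proposed in the paper; it only shows where the SIC and SIC-free receivers differ, namely whether the common stream is cancelled or treated as noise when decoding the private stream.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-user MISO setup (hypothetical dimensions, not from the paper).
Nt, K = 4, 2                      # transmit antennas, users
H = (rng.standard_normal((K, Nt)) + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)
snr = 10.0                        # total transmit power / noise power

# Simple precoders: common stream along the matched-filter direction of the
# channel sum, private streams via zero-forcing (a common heuristic, not the
# optimized finite-constellation design proposed in the paper).
p_c = H.sum(axis=0).conj()
p_c /= np.linalg.norm(p_c)
P_zf = np.linalg.pinv(H)
P_zf /= np.linalg.norm(P_zf, axis=0, keepdims=True)

alpha = 0.5                       # fraction of power given to the common stream
Pc, Pp = alpha * snr, (1 - alpha) * snr / K

for k in range(K):
    hk = H[k]
    g_c = Pc * abs(hk @ p_c) ** 2
    g_k = Pp * abs(hk @ P_zf[:, k]) ** 2
    g_j = Pp * sum(abs(hk @ P_zf[:, j]) ** 2 for j in range(K) if j != k)

    # The common stream is always decoded first, treating private streams as noise.
    r_common = np.log2(1 + g_c / (1 + g_k + g_j))

    # SIC receiver: the decoded common signal is cancelled before the private stream.
    r_private_sic = np.log2(1 + g_k / (1 + g_j))

    # SIC-free receiver: the common signal is treated as extra noise instead.
    r_private_nosic = np.log2(1 + g_k / (1 + g_j + g_c))

    print(f"user {k}: common {r_common:.2f}  private(SIC) {r_private_sic:.2f}  "
          f"private(no SIC) {r_private_nosic:.2f} bit/s/Hz")
```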

Vision-based Excavator Activity Analysis and Safety Monitoring System

Oct 06, 2021
Sibo Zhang, Liangjun Zhang

In this paper, we propose an excavator activity analysis and safety monitoring system that leverages recent advances in deep learning and computer vision. The proposed system detects the surrounding environment and the excavators while estimating the excavators' poses and actions. Compared to previous systems, our method achieves higher accuracy in object detection, pose estimation, and action recognition. In addition, we build an excavator dataset using the Autonomous Excavator System (AES) in a waste disposal and recycling scene to demonstrate the effectiveness of our system, and we also evaluate our method on a benchmark construction dataset. The experimental results show that the proposed action recognition approach outperforms state-of-the-art approaches by about 5.18% in top-1 accuracy.
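
As a rough illustration of how such a system fits together, the following Python skeleton sketches the detection, pose estimation, action recognition, and safety monitoring stages with stub components; the class names, keypoint count, and 16-frame window are hypothetical placeholders, not the networks or settings used in the paper.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class Detection:
    box: np.ndarray          # (4,) xyxy
    label: str               # e.g. "excavator", "worker"
    score: float

def detect_objects(frame: np.ndarray) -> List[Detection]:
    """Stand-in for the object detector; a real system would run a trained model."""
    return []

def estimate_pose(frame: np.ndarray, det: Detection) -> np.ndarray:
    """Stand-in keypoint estimator for the excavator's boom/arm/bucket."""
    return np.zeros((6, 2))   # 6 hypothetical keypoints, (x, y) each

def recognize_action(pose_seq: np.ndarray) -> str:
    """Stand-in classifier over a window of poses (e.g. digging, dumping, idle)."""
    return "idle" if pose_seq.std() < 1e-6 else "digging"

def monitor_safety(dets: List[Detection], action: str) -> bool:
    """Flag the case of a worker detected near an active excavator."""
    workers = [d for d in dets if d.label == "worker"]
    return action != "idle" and len(workers) > 0

def process_clip(frames: List[np.ndarray]) -> None:
    pose_history = []
    for frame in frames:
        dets = detect_objects(frame)
        excavators = [d for d in dets if d.label == "excavator"]
        if excavators:
            pose_history.append(estimate_pose(frame, excavators[0]))
        if len(pose_history) >= 16:                 # sliding window of poses
            action = recognize_action(np.stack(pose_history[-16:]))
            if monitor_safety(dets, action):
                print("safety alert: worker near active excavator")

# Smoke test on blank frames (no detections, so nothing is flagged).
process_clip([np.zeros((720, 1280, 3), dtype=np.uint8) for _ in range(4)])
```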

Text2Video: Text-driven Talking-head Video Synthesis with Phonetic Dictionary

Apr 29, 2021
Sibo Zhang, Jiahong Yuan, Miao Liao, Liangjun Zhang

With the advance of deep learning technology, automatic video generation from audio or text has become an emerging and promising research topic. In this paper, we present a novel approach to synthesize video from text. The method builds a phoneme-pose dictionary and trains a generative adversarial network (GAN) to generate video from interpolated phoneme poses. Compared to audio-driven video generation algorithms, our approach has a number of advantages: 1) it needs only a fraction of the training data required by an audio-driven approach; 2) it is more flexible and not vulnerable to speaker variation; 3) it significantly reduces preprocessing, training, and inference time. We perform extensive experiments to compare the proposed method with state-of-the-art talking-face generation methods on a benchmark dataset and on datasets of our own. The results demonstrate the effectiveness and superiority of our approach.
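
A minimal sketch of the phoneme-pose idea is shown below: a toy phoneme-to-pose dictionary, a stand-in grapheme-to-phoneme step, and linear interpolation between key-poses to produce the per-frame conditioning track that a GAN generator would consume. The phoneme set, pose dimensionality, and frames-per-phoneme value are illustrative assumptions, not the paper's actual dictionary or model.

```python
import numpy as np

# Toy phoneme-pose dictionary: each phoneme maps to one key-pose vector.
rng = np.random.default_rng(1)
POSE_DIM = 20
phoneme_pose = {p: rng.standard_normal(POSE_DIM) for p in ["HH", "AH", "L", "OW", "sil"]}

def text_to_phonemes(text: str) -> list[str]:
    """Stand-in for a grapheme-to-phoneme step (a real system would use a
    pronunciation lexicon such as CMUdict)."""
    return {"hello": ["HH", "AH", "L", "OW"]}.get(text.lower(), ["sil"])

def interpolate_poses(phonemes: list[str], frames_per_phoneme: int = 5) -> np.ndarray:
    """Linearly interpolate between consecutive phoneme key-poses to get a
    smooth per-frame pose track that can condition a video generator."""
    keys = [phoneme_pose[p] for p in phonemes]
    frames = []
    for a, b in zip(keys[:-1], keys[1:]):
        for t in np.linspace(0.0, 1.0, frames_per_phoneme, endpoint=False):
            frames.append((1 - t) * a + t * b)
    frames.append(keys[-1])
    return np.stack(frames)

pose_track = interpolate_poses(text_to_phonemes("hello"))
print(pose_track.shape)   # (frames, POSE_DIM) conditioning sequence for the GAN
```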

Personalized Speech2Video with 3D Skeleton Regularization and Expressive Body Poses

Jul 17, 2020
Miao Liao, Sibo Zhang, Peng Wang, Hao Zhu, Ruigang Yang

In this paper, we propose a novel approach to convert given speech audio to a photo-realistic speaking video of a specific person, where the output video has synchronized, realistic, and expressive body dynamics. We achieve this by first generating 3D skeleton movements from the audio sequence using a recurrent neural network (RNN), and then synthesizing the output video via a conditional generative adversarial network (GAN). To make the skeleton movement realistic and expressive, we embed the knowledge of an articulated 3D human skeleton and a learned dictionary of personal speech-iconic gestures into the generation process in both the learning and testing pipelines. The former prevents the generation of unreasonable body distortion, while the latter helps our model quickly learn meaningful body movement from a few recorded videos. To produce photo-realistic, high-resolution video with motion details, we insert part attention mechanisms into the conditional GAN, where each detailed part, e.g., the head and hands, is automatically zoomed in and given its own discriminator. To validate our approach, we collect a dataset of 20 high-quality videos of one male and one female model reading various documents on different topics. Compared with previous state-of-the-art pipelines handling similar tasks, our approach achieves better results in a user study.
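
The first stage (audio to 3D skeleton) can be sketched as a small recurrent network. The snippet below is a schematic PyTorch example assuming MFCC-like audio features and an 18-joint skeleton, which are illustrative choices rather than the authors' configuration, and it omits the skeleton regularization, gesture dictionary, and part-attention GAN stages.

```python
import torch
import torch.nn as nn

class AudioToSkeleton(nn.Module):
    """Minimal RNN mapping a sequence of audio features to 3D joint positions,
    in the spirit of the paper's first stage (dimensions are illustrative)."""
    def __init__(self, audio_dim=26, hidden=256, num_joints=18):
        super().__init__()
        self.rnn = nn.LSTM(audio_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, num_joints * 3)
        self.num_joints = num_joints

    def forward(self, audio_feats):                # (B, T, audio_dim)
        h, _ = self.rnn(audio_feats)
        joints = self.head(h)                      # (B, T, num_joints * 3)
        return joints.reshape(audio_feats.size(0), -1, self.num_joints, 3)

# Smoke test with random MFCC-like features.
model = AudioToSkeleton()
skeleton = model(torch.randn(2, 100, 26))
print(skeleton.shape)      # torch.Size([2, 100, 18, 3]) -> fed to a conditional GAN
```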

DVI: Depth Guided Video Inpainting for Autonomous Driving

Jul 17, 2020
Miao Liao, Feixiang Lu, Dingfu Zhou, Sibo Zhang, Wei Li, Ruigang Yang

To obtain clear street views and photo-realistic simulation in autonomous driving, we present an automatic video inpainting algorithm that removes traffic agents from videos and synthesizes the missing regions with the guidance of depth/point cloud data. By building a dense 3D map from stitched point clouds, frames within a video are geometrically correlated via this common 3D map. To fill a target inpainting area in a frame, pixels from other frames can be transformed into the current one with correct occlusion handling. Furthermore, we are able to fuse multiple videos through 3D point cloud registration, making it possible to inpaint a target video with multiple source videos. The motivation is to solve the long-time occlusion problem, where an occluded area is never visible in the entire video. To our knowledge, we are the first to fuse multiple videos for video inpainting. To verify the effectiveness of our approach, we build a large inpainting dataset in a real urban road environment with synchronized images and lidar data, including many challenging scenes, e.g., long-time occlusion. The experimental results show that the proposed approach outperforms state-of-the-art approaches on all criteria; in particular, the RMSE (Root Mean Squared Error) is reduced by about 13%.
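
At the core of depth-guided inpainting is lifting a target pixel to 3D with its depth and reprojecting it into another frame to borrow a colour, subject to an occlusion check. The NumPy sketch below shows only this pinhole reprojection step; the intrinsics and the relative pose are hypothetical placeholders, and the dense 3D map construction, multi-video fusion, and blending steps of the full method are omitted.

```python
import numpy as np

# Hypothetical camera intrinsics (focal length and principal point).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

def lift(u, v, depth):
    """Back-project pixel (u, v) with metric depth into target-camera coordinates."""
    return depth * np.linalg.inv(K) @ np.array([u, v, 1.0])

def reproject(point_tgt, T_src_tgt):
    """Transform a target-frame 3D point into the source frame and project it."""
    p = T_src_tgt[:3, :3] @ point_tgt + T_src_tgt[:3, 3]
    if p[2] <= 0:                      # behind the source camera -> not visible
        return None
    uv = K @ (p / p[2])
    return uv[0], uv[1], p[2]          # pixel coords + depth for the occlusion test

# Example: source camera is the target camera moved 1 m forward (hypothetical pose).
T_src_tgt = np.eye(4)
T_src_tgt[2, 3] = -1.0
print(reproject(lift(700, 400, 12.0), T_src_tgt))
```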

CVPR 2019 WAD Challenge on Trajectory Prediction and 3D Perception

Apr 06, 2020
Sibo Zhang, Yuexin Ma, Ruigang Yang, Xin Li, Yanliang Zhu, Deheng Qian, Zetong Yang, Wenjing Zhang, Yuanpei Liu

This paper reviews the CVPR 2019 Workshop on Autonomous Driving (WAD) challenge. Baidu's Robotics and Autonomous Driving Lab (RAL) provided a 150-minute labeled trajectory and 3D perception dataset for urban traffic, including about 80k lidar point clouds and 1,000 km of trajectories. The challenge comprises two tasks: (1) trajectory prediction and (2) 3D lidar object detection. More than 200 teams submitted results to the leaderboard, and more than 1,000 participants attended the workshop.

TrafficPredict: Trajectory Prediction for Heterogeneous Traffic-Agents

Apr 09, 2019
Yuexin Ma, Xinge Zhu, Sibo Zhang, Ruigang Yang, Wenping Wang, Dinesh Manocha

To safely and efficiently navigate in complex urban traffic, autonomous vehicles must make responsible predictions in relation to surrounding traffic-agents (vehicles, bicycles, pedestrians, etc.). A challenging and critical task is to explore the movement patterns of different traffic-agents and predict their future trajectories accurately, helping the autonomous vehicle make reasonable navigation decisions. To solve this problem, we propose a long short-term memory-based (LSTM-based) real-time traffic prediction algorithm, TrafficPredict. Our approach uses an instance layer to learn instances' movements and interactions, and a category layer to learn the similarities of instances belonging to the same type in order to refine the prediction. To evaluate its performance, we collected trajectory datasets in a large city covering varying conditions and traffic densities, including many challenging scenarios where vehicles, bicycles, and pedestrians move among one another. We evaluate the performance of TrafficPredict on our new dataset and highlight its higher accuracy for trajectory prediction compared with prior prediction methods.

* Accepted by AAAI 2019 (Oral) 
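
The instance-layer/category-layer idea can be caricatured with a small PyTorch model: an instance-level LSTM encodes each agent's own history, and a category embedding injects type-level information before decoding future positions. The dimensions, prediction horizon, and omission of the instance-interaction graph are simplifications for illustration, not the TrafficPredict architecture.

```python
import torch
import torch.nn as nn

class TwoLayerTrajectoryPredictor(nn.Module):
    """Schematic two-level predictor: an instance-level LSTM encodes each
    agent's own motion, and a category-level embedding adds information shared
    by agents of the same type (vehicle / bicycle / pedestrian)."""
    def __init__(self, num_categories=3, hidden=64, pred_len=12):
        super().__init__()
        self.instance_rnn = nn.LSTM(2, hidden, batch_first=True)
        self.category_emb = nn.Embedding(num_categories, hidden)
        self.decoder = nn.Linear(2 * hidden, pred_len * 2)
        self.pred_len = pred_len

    def forward(self, history, category):         # history: (B, T, 2), category: (B,)
        _, (h, _) = self.instance_rnn(history)
        feat = torch.cat([h[-1], self.category_emb(category)], dim=-1)
        return self.decoder(feat).view(-1, self.pred_len, 2)

# Smoke test: 4 agents, 8 observed (x, y) positions each.
model = TwoLayerTrajectoryPredictor()
future = model(torch.randn(4, 8, 2), torch.tensor([0, 1, 2, 0]))
print(future.shape)    # torch.Size([4, 12, 2]) predicted (x, y) positions
```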