Chao Zhu

Unsupervised Multi-view Pedestrian Detection

May 21, 2023
Mengyin Liu, Chao Zhu, Shiqi Ren, Xu-Cheng Yin

With the prosperity of video surveillance, multiple visual sensors have been applied for accurate localization of pedestrians in a specific area, which facilitates various applications like intelligent safety or new retailing. However, previous methods rely on supervision from human-annotated pedestrian positions in every video frame and camera view, which is a heavy burden in addition to the necessary camera calibration and synchronization. Therefore, we propose in this paper an Unsupervised Multi-view Pedestrian Detection approach (UMPD) that eliminates the need for annotations to learn a multi-view pedestrian detector. 1) Firstly, Semantic-aware Iterative Segmentation (SIS) is proposed to extract discriminative visual representations of the input images from different camera views via an unsupervised pretrained model, and then convert them into 2D segments of pedestrians, based on our proposed iterative Principal Component Analysis and the zero-shot semantic classes from vision-language pretrained models. 2) Secondly, we propose Vertical-aware Differential Rendering (VDR) to not only learn the densities and colors of 3D voxels from the masks of SIS, the images, and the camera poses, but also constrain the voxels to be vertical to the ground plane, following the physical characteristics of pedestrians. 3) Thirdly, the densities of the 3D voxels learned by VDR are projected onto Bird's-Eye-View as the final detection results. Extensive experiments on popular multi-view pedestrian detection benchmarks, i.e., Wildtrack and MultiviewX, show that our proposed UMPD approach, as the first unsupervised method to the best of our knowledge, performs competitively with the previous state-of-the-art supervised techniques. Code will be available.
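
As a minimal sketch of the final step described above (not the authors' released code): the learned voxel densities can be collapsed along the vertical axis to produce a Bird's-Eye-View map, from which detections are read off by thresholding. The array shapes, threshold, and helper names below are illustrative assumptions.

```python
# Hedged sketch: project learned 3D voxel densities onto the ground plane (BEV).
import numpy as np

def bev_from_densities(density, threshold=0.5):
    """density: (X, Y, Z) voxel occupancy learned by the renderer (VDR).
    Returns a Bird's-Eye-View map (X, Y) and a thresholded detection mask."""
    # Pedestrians are roughly vertical columns, so summing over Z concentrates
    # their evidence in the BEV plane.
    bev = density.sum(axis=2)
    bev = (bev - bev.min()) / (bev.max() - bev.min() + 1e-8)  # normalize to [0, 1]
    return bev, bev > threshold

# Toy usage with random densities standing in for the learned voxels.
rng = np.random.default_rng(0)
bev, detections = bev_from_densities(rng.random((120, 360, 32)))
print(bev.shape, detections.sum())
```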

VLPD: Context-Aware Pedestrian Detection via Vision-Language Semantic Self-Supervision

Apr 06, 2023
Mengyin Liu, Jie Jiang, Chao Zhu, Xu-Cheng Yin

Detecting pedestrians accurately in urban scenes is significant for realistic applications like autonomous driving or video surveillance. However, confusing human-like objects often lead to wrong detections, and small-scale or heavily occluded pedestrians are easily missed due to their unusual appearances. To address these challenges, object regions alone are inadequate, so how to fully utilize more explicit and semantic contexts becomes a key problem. Meanwhile, previous context-aware pedestrian detectors either only learn latent contexts from visual cues, or need laborious annotations to obtain explicit and semantic contexts. Therefore, we propose in this paper a novel approach via Vision-Language semantic self-supervision for context-aware Pedestrian Detection (VLPD) to explicitly model semantic contexts without any extra annotations. Firstly, we propose a self-supervised Vision-Language Semantic (VLS) segmentation method, which learns both fully supervised pedestrian detection and contextual segmentation via self-generated explicit labels of semantic classes from vision-language models. Furthermore, a self-supervised Prototypical Semantic Contrastive (PSC) learning method is proposed to better discriminate pedestrians from other classes, based on the more explicit and semantic contexts obtained from VLS. Extensive experiments on popular benchmarks show that our proposed VLPD achieves superior performance over the previous state-of-the-art methods, particularly under challenging circumstances like small scale and heavy occlusion. Code is available at https://github.com/lmy98129/VLPD.
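
A hedged sketch of a prototype-based contrastive loss in the spirit of the PSC learning described above: pedestrian features are pulled toward a pedestrian prototype and pushed away from the prototypes of other semantic classes. This is an illustrative InfoNCE-style formulation, not the paper's exact loss; all names and shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def prototypical_contrastive_loss(features, prototypes, ped_index=0, tau=0.07):
    """features: (N, D) pedestrian region embeddings;
    prototypes: (C, D) one prototype per semantic class (index 0 = pedestrian)."""
    f = F.normalize(features, dim=1)
    p = F.normalize(prototypes, dim=1)
    logits = f @ p.t() / tau                      # (N, C) scaled cosine similarities
    targets = torch.full((f.size(0),), ped_index, dtype=torch.long)
    return F.cross_entropy(logits, targets)

# Toy usage with random tensors standing in for learned embeddings.
loss = prototypical_contrastive_loss(torch.randn(8, 256), torch.randn(5, 256))
print(loss.item())
```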

* Accepted by CVPR 2023 
Boosting Video Super Resolution with Patch-Based Temporal Redundancy Optimization

Jul 18, 2022
Yuhao Huang, Hang Dong, Jinshan Pan, Chao Zhu, Yu Guo, Ding Liu, Lean Fu, Fei Wang

The success of existing video super-resolution (VSR) algorithms stems mainly from exploiting the temporal information of neighboring frames. However, none of these methods has discussed the influence of temporal redundancy in patches with stationary objects and background, and they usually use all the information in the adjacent frames without any discrimination. In this paper, we observe that this temporal redundancy has an adverse effect on information propagation, which limits the performance of most existing VSR methods. Motivated by this observation, we aim to improve existing VSR algorithms by handling temporally redundant patches in an optimized manner. We develop two simple yet effective plug-and-play methods to improve the performance of existing local and non-local propagation-based VSR algorithms on widely used public videos. To evaluate the robustness and performance of existing VSR algorithms more comprehensively, we also collect a new dataset containing a variety of public videos as the test set. Extensive evaluations show that the proposed methods can significantly improve the performance of existing VSR methods on the collected videos from wild scenarios while maintaining their performance on existing commonly used datasets. The code is available at https://github.com/HYHsimon/Boosted-VSR.
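
A minimal sketch of how temporally redundant patches might be flagged before propagation: co-located patches in adjacent frames are compared, and those with negligible change are marked as stationary candidates. The patch size and threshold are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def redundant_patch_mask(prev_frame, cur_frame, patch=32, thresh=2.0):
    """Frames: (H, W) grayscale arrays. Returns a boolean grid where True marks
    a patch with negligible temporal change (a candidate for optimized handling)."""
    h, w = cur_frame.shape
    gh, gw = h // patch, w // patch
    mask = np.zeros((gh, gw), dtype=bool)
    for i in range(gh):
        for j in range(gw):
            a = prev_frame[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            b = cur_frame[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            # Mean absolute difference as a simple stationarity measure.
            mask[i, j] = np.abs(a.astype(np.float32) - b).mean() < thresh
    return mask

# Toy usage: identical frames are fully redundant.
f = np.random.randint(0, 255, (128, 128), dtype=np.uint8)
print(redundant_patch_mask(f, f).all())  # True
```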

Boosting Multi-Modal E-commerce Attribute Value Extraction via Unified Learning Scheme and Dynamic Range Minimization

Jul 15, 2022
Mengyin Liu, Chao Zhu, Hongyu Gao, Weibo Gu, Hongfa Wang, Wei Liu, Xu-cheng Yin

With the prosperity of the e-commerce industry, various modalities, e.g., vision and language, are utilized to describe product items. It is an enormous challenge to understand such diversified data, especially when extracting attribute-value pairs from text sequences with the aid of helpful image regions. Although a series of previous works have been dedicated to this task, there remain seldom-investigated obstacles that hinder further improvements: 1) Parameters from upstream single-modal pretraining are inadequately applied, without proper joint fine-tuning on the downstream multi-modal task. 2) To select descriptive parts of images, a simple late fusion is widely applied, regardless of the prior knowledge that language-related information should be encoded into a common linguistic embedding space by stronger encoders. 3) Due to diversity across products, their attribute sets tend to vary greatly, but current approaches predict over an unnecessarily maximal range, leading to more potential false positives. To address these issues, we propose in this paper a novel approach to boost multi-modal e-commerce attribute value extraction via a unified learning scheme and dynamic range minimization: 1) Firstly, a unified scheme is designed to jointly train a multi-modal task with pretrained single-modal parameters. 2) Secondly, a text-guided information range minimization method is proposed to adaptively encode descriptive parts of each modality into an identical space with a powerful pretrained linguistic model. 3) Moreover, a prototype-guided attribute range minimization method is proposed to first determine the proper attribute set of the current product, and then select prototypes to guide the prediction of the chosen attributes. Experiments on popular multi-modal e-commerce benchmarks show that our approach achieves superior performance over the other state-of-the-art techniques.
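
A hedged sketch of the attribute-range-minimization idea: rather than predicting over the full attribute vocabulary, the product is first scored against per-attribute prototypes and only the plausible subset is kept for prediction. The embeddings, threshold, and names below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def select_attribute_range(product_emb, attribute_prototypes, thresh=0.3):
    """product_emb: (D,) fused multi-modal product embedding;
    attribute_prototypes: dict name -> (D,) prototype vector.
    Returns attribute names whose prototype similarity exceeds the threshold."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    scores = {name: cos(product_emb, proto) for name, proto in attribute_prototypes.items()}
    return [name for name, s in sorted(scores.items(), key=lambda x: -x[1]) if s > thresh]

# Toy usage with random vectors standing in for learned embeddings.
rng = np.random.default_rng(1)
protos = {a: rng.normal(size=64) for a in ["color", "material", "sleeve_length", "capacity"]}
print(select_attribute_range(rng.normal(size=64), protos))
```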

WSSS4LUAD: Grand Challenge on Weakly-supervised Tissue Semantic Segmentation for Lung Adenocarcinoma

Apr 14, 2022
Chu Han, Xipeng Pan, Lixu Yan, Huan Lin, Bingbing Li, Su Yao, Shanshan Lv, Zhenwei Shi, Jinhai Mai, Jiatai Lin, Bingchao Zhao, Zeyan Xu, Zhizhen Wang, Yumeng Wang, Yuan Zhang, Huihui Wang, Chao Zhu, Chunhui Lin, Lijian Mao, Min Wu, Luwen Duan, Jingsong Zhu, Dong Hu, Zijie Fang, Yang Chen, Yongbing Zhang, Yi Li, Yiwen Zou, Yiduo Yu, Xiaomeng Li, Haiming Li, Yanfen Cui, Guoqiang Han, Yan Xu, Jun Xu, Huihua Yang, Chunming Li, Zhenbing Liu, Cheng Lu, Xin Chen, Changhong Liang, Qingling Zhang, Zaiyi Liu

Lung cancer is the leading cause of cancer death worldwide, and adenocarcinoma (LUAD) is the most common subtype. Exploiting the potential value of histopathology images can promote precision medicine in oncology. Tissue segmentation is the basic upstream task of histopathology image analysis. Existing deep learning models have achieved superior segmentation performance but require sufficient pixel-level annotations, which are time-consuming and expensive. To enrich the label resources of LUAD and to alleviate the annotation effort, we organize the challenge WSSS4LUAD to call for outstanding weakly-supervised semantic segmentation (WSSS) techniques for histopathology images of LUAD. Participants have to design algorithms to segment tumor epithelial, tumor-associated stroma and normal tissue with only patch-level labels. This challenge includes 10,091 patch-level annotations (the training set) and over 130 million labeled pixels (the validation and test sets), from 87 WSIs (67 from GDPH, 20 from TCGA). All the labels were generated by a pathologist-in-the-loop pipeline with the help of AI models and checked by a label review board. Among 532 registrations, 28 teams submitted results in the test phase with over 1,000 submissions. Finally, the first-place team achieved an mIoU of 0.8413 (tumor: 0.8389, stroma: 0.7931, normal: 0.8919). According to the technical reports of the top-tier teams, CAM is still the most popular approach in WSSS. CutMix data augmentation has been widely adopted to generate more reliable samples. With the success of this challenge, we believe that WSSS approaches with patch-level annotations can complement traditional pixel-level annotations while reducing the annotation effort. The entire dataset has been released to encourage more research on computational pathology in LUAD and more novel WSSS techniques.
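
As a small sanity check of the reported first-place result, the three class-wise IoUs quoted above do average to the stated mIoU:

```python
# The class-wise IoUs reported for the winning team.
ious = {"tumor": 0.8389, "stroma": 0.7931, "normal": 0.8919}
miou = sum(ious.values()) / len(ious)
print(round(miou, 4))  # 0.8413, matching the reported mIoU
```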

Deep Recurrent Neural Network with Multi-scale Bi-directional Propagation for Video Deblurring

Dec 09, 2021
Chao Zhu, Hang Dong, Jinshan Pan, Boyang Liang, Yuhao Huang, Lean Fu, Fei Wang

The success of state-of-the-art video deblurring methods stems mainly from implicit or explicit estimation of alignment among the adjacent frames for latent video restoration. However, due to the influence of blur, estimating the alignment information from blurry adjacent frames is not a trivial task, and inaccurate estimations will interfere with the subsequent frame restoration. Instead of estimating alignment information, we propose a simple and effective deep Recurrent Neural Network with Multi-scale Bi-directional Propagation (RNN-MBP) to effectively propagate and gather information from unaligned neighboring frames for better video deblurring. Specifically, we build a Multi-scale Bi-directional Propagation (MBP) module with two U-Net RNN cells which can directly exploit the inter-frame information from unaligned neighboring hidden states by integrating them at different scales. Moreover, to better evaluate the proposed algorithm and existing state-of-the-art methods on real-world blurry scenes, we also create a Real-World Blurry Video Dataset (RBVD) with a well-designed Digital Video Acquisition System (DVAS) and use it as the training and evaluation dataset. Extensive experimental results demonstrate that the proposed RBVD dataset effectively improves the performance of existing algorithms on real-world blurry videos, and the proposed algorithm performs favorably against the state-of-the-art methods on three typical benchmarks. The code is available at https://github.com/XJTU-CVLAB-LOWLEVEL/RNN-MBP.
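
A hedged, schematic sketch of bi-directional recurrent propagation in the spirit of the MBP module: the sequence is swept forward and backward with recurrent cells, and the two hidden states are fused per frame. The cell here is a generic GRU-style placeholder rather than the paper's U-Net RNN cell, and the feature shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BiDirectionalPropagation(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.fwd_cell = nn.GRUCell(channels, channels)
        self.bwd_cell = nn.GRUCell(channels, channels)
        self.fuse = nn.Linear(2 * channels, channels)

    def forward(self, frames):                     # frames: (T, B, C) per-frame features
        T, B, C = frames.shape
        h_f = frames.new_zeros(B, C)
        h_b = frames.new_zeros(B, C)
        fwd, bwd = [], [None] * T
        for t in range(T):                         # forward sweep
            h_f = self.fwd_cell(frames[t], h_f)
            fwd.append(h_f)
        for t in reversed(range(T)):               # backward sweep
            h_b = self.bwd_cell(frames[t], h_b)
            bwd[t] = h_b
        # Fuse both directions for every frame.
        return torch.stack([self.fuse(torch.cat([f, b], dim=1))
                            for f, b in zip(fwd, bwd)])

print(BiDirectionalPropagation()(torch.randn(5, 2, 16)).shape)  # torch.Size([5, 2, 16])
```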

* Accepted by AAAI-2022 

SiamPolar: Semi-supervised Realtime Video Object Segmentation with Polar Representation

Oct 27, 2021
Yaochen Li, Yuhui Hong, Yonghong Song, Chao Zhu, Ying Zhang, Ruihao Wang

Video object segmentation (VOS) is an essential part of autonomous vehicle navigation, and real-time speed is as important as accuracy for autonomous driving algorithms. In this paper, we propose a semi-supervised real-time method based on a Siamese network using a new polar representation. The input is initialized with bounding boxes rather than object masks, as applied in video object detection tasks. The polar representation reduces the parameters for encoding masks with only a subtle accuracy loss, so that the algorithm speed can be improved significantly. An asymmetric Siamese network is also developed to extract features at different spatial scales. Moreover, the peeling convolution is proposed to reduce the antagonism among the branches of the polar head; the repeated cross-correlation and semi-FPN are designed based on this idea. The experimental results on the DAVIS-2016 dataset and other public datasets demonstrate the effectiveness of the proposed method.
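
A hedged sketch of a polar mask representation like the one described above: a mask is encoded as radial distances from the object center at evenly spaced angles, which needs far fewer parameters than a dense binary mask. The ray-sampling scheme below is an illustrative assumption, not the paper's encoder.

```python
import numpy as np

def mask_to_polar(mask, n_rays=36):
    """mask: (H, W) boolean object mask. Returns (center, distances[n_rays])."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                       # object center
    angles = np.linspace(0, 2 * np.pi, n_rays, endpoint=False)
    h, w = mask.shape
    dists = np.zeros(n_rays)
    for i, a in enumerate(angles):
        # March outwards along the ray until leaving the mask (or the image).
        r = 0.0
        while True:
            y, x = int(round(cy + r * np.sin(a))), int(round(cx + r * np.cos(a)))
            if not (0 <= y < h and 0 <= x < w) or not mask[y, x]:
                break
            r += 1.0
        dists[i] = r
    return (cy, cx), dists

# Toy usage: a filled square mask.
m = np.zeros((64, 64), dtype=bool); m[16:48, 16:48] = True
center, d = mask_to_polar(m)
print(center, d.min(), d.max())
```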

* Neurocomputing, Volume 467, 7 January 2022, Pages 491-503  
* 11 pages, 11 figures, journal 

Mutual-Supervised Feature Modulation Network for Occluded Pedestrian Detection

Oct 21, 2020
Ye He, Chao Zhu, Xu-Cheng Yin

State-of-the-art pedestrian detectors have achieved significant progress on non-occluded pedestrians, yet they still struggle under heavy occlusion. The recent occlusion-handling strategy of popular two-stage approaches is to build a two-branch architecture with the help of additional visible-body annotations. Nonetheless, these methods still have weaknesses: either the two branches are trained independently with only score-level fusion, which cannot guarantee that the detectors learn sufficiently robust pedestrian features, or attention mechanisms are exploited only to emphasize the visible-body features. However, the visible-body features of heavily occluded pedestrians are concentrated in a relatively small area, which easily causes missed detections. To address the above issues, we propose in this paper a novel Mutual-Supervised Feature Modulation (MSFM) network to better handle occluded pedestrian detection. The key MSFM module in our network calculates the similarity loss of full-body boxes and visible-body boxes corresponding to the same pedestrian, so that the full-body detector can learn more complete and robust pedestrian features with the assistance of contextual features from the occluding parts. To facilitate the MSFM module, we also propose a novel two-branch architecture, consisting of a standard full-body detection branch and an extra visible-body classification branch. These two branches are trained in a mutual-supervised way with full-body annotations and visible-body annotations, respectively. To verify the effectiveness of our proposed method, extensive experiments are conducted on two challenging pedestrian datasets: Caltech and CityPersons. Our approach achieves superior performance compared to other state-of-the-art methods on both datasets, especially in heavy occlusion cases.
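
A hedged sketch of the feature-similarity idea behind the MSFM module: features pooled from a pedestrian's full-body box and from its visible-body box are encouraged to agree via a cosine-similarity loss. The exact loss form and feature extraction in the paper may differ; this is only illustrative.

```python
import torch
import torch.nn.functional as F

def mutual_similarity_loss(full_body_feats, visible_body_feats):
    """Both inputs: (N, D) features for the same N pedestrians.
    Loss is 1 - cosine similarity, averaged over the batch."""
    sim = F.cosine_similarity(full_body_feats, visible_body_feats, dim=1)
    return (1.0 - sim).mean()

# Toy usage with random feature vectors.
print(mutual_similarity_loss(torch.randn(4, 128), torch.randn(4, 128)).item())
```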

* Accepted at ICPR2020, 8 pages, 6 figures 

Stem-leaf segmentation and phenotypic trait extraction of maize shoots from three-dimensional point cloud

Sep 07, 2020
Chao Zhu, Teng Miao, Tongyu Xu, Tao Yang, Na Li

Nowadays, there are many approaches to acquire three-dimensional (3D) point clouds of maize plants. However, automatic stem-leaf segmentation of maize shoots from 3D point clouds remains challenging, especially for newly emerging leaves that are very close and wrapped together during the seedling stage. To address this issue, we propose an automatic segmentation method consisting of three main steps: skeleton extraction, coarse segmentation based on the skeleton, and fine segmentation based on stem-leaf classification. The segmentation method was tested on 30 maize seedlings and compared with manually obtained ground truth. The mean precision, mean recall, mean micro F1 score and mean overall accuracy of our segmentation algorithm were 0.964, 0.966, 0.963 and 0.969, respectively. Using the segmentation results, two applications were also developed in this paper: phenotypic trait extraction and skeleton optimization. Six phenotypic parameters can be accurately and automatically measured, including plant height, crown diameter, stem height and diameter, and leaf width and length. Furthermore, the R2 values for the six phenotypic traits were all above 0.94. The results indicate that the proposed algorithm can automatically and precisely segment not only the fully expanded leaves, but also the new leaves that are wrapped and close together. The proposed approach may play an important role in further maize research and applications, such as genotype-to-phenotype study, geometric reconstruction and dynamic growth animation. We release the source code and test data at https://github.com/syau-miao/seg4maize.git
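
A hedged sketch of how the per-plant segmentation scores quoted above could be computed: predicted point labels are compared against the ground truth to derive precision, recall, and micro F1. The label encoding is an illustrative assumption, not the authors' evaluation script.

```python
import numpy as np

def point_segmentation_scores(pred, gt):
    """pred, gt: (N,) integer organ labels for the N points of one maize shoot."""
    classes = np.unique(gt)
    tp = fp = fn = 0
    for c in classes:
        tp += np.sum((pred == c) & (gt == c))
        fp += np.sum((pred == c) & (gt != c))
        fn += np.sum((pred != c) & (gt == c))
    precision = tp / (tp + fp + 1e-8)
    recall = tp / (tp + fn + 1e-8)
    micro_f1 = 2 * precision * recall / (precision + recall + 1e-8)
    return precision, recall, micro_f1

# Toy usage: label 0 = stem, labels 1..k = individual leaves.
gt = np.array([0, 0, 1, 1, 2, 2]); pred = np.array([0, 0, 1, 2, 2, 2])
print(point_segmentation_scores(pred, gt))
```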

Machine learning driven synthesis of few-layered WTe2

Oct 10, 2019
Manzhang Xu, Bijun Tang, Chao Zhu, Yuhao Lu, Chao Zhu, Lu Zheng, Jingyu Zhang, Nannan Han, Yuxi Guo, Jun Di, Pin Song, Yongmin He, Lixing Kang, Zhiyong Zhang, Wu Zhao, Cuntai Guan, Xuewen Wang, Zheng Liu

Reducing the lateral scale of two-dimensional (2D) materials to one-dimensional (1D) has attracted substantial research interest, not only to achieve competitive electronic device applications but also for the exploration of fundamental physical properties. Controllable synthesis of high-quality 1D nanoribbons (NRs) is thus highly desirable and essential for further study. Traditional exploration of the optimal synthesis conditions of novel materials is based on a trial-and-error approach, which is time-consuming, costly and laborious. Recently, machine learning (ML) has demonstrated promising capability in guiding material synthesis by effectively learning from past data and then making recommendations. Here, we report the implementation of supervised ML for the chemical vapor deposition (CVD) synthesis of high-quality 1D few-layered WTe2 nanoribbons (NRs). The synthesis parameters of the WTe2 NRs are optimized by the trained ML model. On top of that, the growth mechanism of the as-synthesized 1T' few-layered WTe2 NRs is further proposed, which may inspire growth strategies for other 1D nanostructures. Our findings suggest that ML is a powerful and efficient approach to aid the synthesis of 1D nanostructures, opening up new opportunities for intelligent material development.
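
A hedged sketch of the supervised-ML loop described above: a classifier is fit on past CVD runs (synthesis parameters mapped to whether NRs grew) and then used to rank new candidate conditions by predicted success probability. The feature names, model choice, and data below are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
# Columns: growth temperature (C), carrier-gas flow rate (sccm), growth time (min).
past_params = rng.uniform([650, 20, 5], [850, 120, 60], size=(40, 3))
past_success = (past_params[:, 0] > 750).astype(int)      # toy stand-in labels

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(past_params, past_success)

# Rank unseen candidate conditions by predicted probability of successful growth.
candidates = rng.uniform([650, 20, 5], [850, 120, 60], size=(5, 3))
probs = model.predict_proba(candidates)[:, 1]
for cond, p in sorted(zip(candidates.tolist(), probs), key=lambda x: -x[1]):
    print([round(v, 1) for v in cond], round(float(p), 2))
```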
