Wenhua Zhang

Hybrid-SORT: Weak Cues Matter for Online Multi-Object Tracking

Aug 01, 2023
Mingzhan Yang, Guangxin Han, Bin Yan, Wenhua Zhang, Jinqing Qi, Huchuan Lu, Dong Wang

Multi-Object Tracking (MOT) aims to detect and associate all desired objects across frames. Most methods accomplish the task by explicitly or implicitly leveraging strong cues (i.e., spatial and appearance information), which exhibit powerful instance-level discrimination. However, when object occlusion and clustering occur, both spatial and appearance information become ambiguous simultaneously due to the high overlap between objects. In this paper, we demonstrate that this long-standing challenge in MOT can be efficiently and effectively resolved by incorporating weak cues to compensate for the insufficiency of strong cues. Along with velocity direction, we introduce the confidence state and height state as potential weak cues. With superior performance, our method still maintains Simple, Online and Real-Time (SORT) characteristics. Furthermore, our method shows strong generalization for diverse trackers and scenarios in a plug-and-play and training-free manner. Significant and consistent improvements are observed when applying our method to 5 different representative trackers. Moreover, by leveraging both strong and weak cues, our method Hybrid-SORT achieves superior performance on diverse benchmarks, including MOT17, MOT20, and especially DanceTrack, where interaction and occlusion are frequent and severe. The code and models are available at https://github.com/ymzis69/HybirdSORT.
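
The association logic might look roughly like the following sketch, in which detection confidence and bounding-box height are folded into a SORT-style cost matrix next to the IoU term. The weights and function names are illustrative assumptions, not the paper's exact formulation.

    # Illustrative sketch (not the authors' exact formulation): combining a strong
    # cue (IoU) with weak cues (confidence and height similarity) into one
    # association cost matrix for a SORT-style tracker.
    import numpy as np

    def iou(box_a, box_b):
        """IoU of two boxes given as (x1, y1, x2, y2)."""
        x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter + 1e-6)

    def hybrid_cost(tracks, detections, w_conf=0.2, w_height=0.2):
        """Cost = (1 - IoU) plus penalties for confidence and height mismatch.

        tracks / detections: lists of dicts with 'box' (x1, y1, x2, y2) and 'conf'.
        The weak-cue terms mainly matter when IoU alone is ambiguous, e.g. under
        heavy occlusion.
        """
        cost = np.zeros((len(tracks), len(detections)))
        for i, trk in enumerate(tracks):
            for j, det in enumerate(detections):
                strong = 1.0 - iou(trk["box"], det["box"])
                conf_gap = abs(trk["conf"] - det["conf"])                  # confidence state
                h_trk = trk["box"][3] - trk["box"][1]
                h_det = det["box"][3] - det["box"][1]
                height_gap = abs(h_trk - h_det) / max(h_trk, h_det, 1e-6)  # height state
                cost[i, j] = strong + w_conf * conf_gap + w_height * height_gap
        return cost  # feed to scipy.optimize.linear_sum_assignment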

CoNIC Challenge: Pushing the Frontiers of Nuclear Detection, Segmentation, Classification and Counting

Mar 14, 2023
Simon Graham, Quoc Dang Vu, Mostafa Jahanifar, Martin Weigert, Uwe Schmidt, Wenhua Zhang, Jun Zhang, Sen Yang, Jinxi Xiang, Xiyue Wang, Josef Lorenz Rumberger, Elias Baumann, Peter Hirsch, Lihao Liu, Chenyang Hong, Angelica I. Aviles-Rivero, Ayushi Jain, Heeyoung Ahn, Yiyu Hong, Hussam Azzuni, Min Xu, Mohammad Yaqub, Marie-Claire Blache, Benoît Piégu, Bertrand Vernay, Tim Scherr, Moritz Böhland, Katharina Löffler, Jiachen Li, Weiqin Ying, Chixin Wang, Dagmar Kainmueller, Carola-Bibiane Schönlieb, Shuolin Liu, Dhairya Talsania, Yughender Meda, Prakash Mishra, Muhammad Ridzuan, Oliver Neumann, Marcel P. Schilling, Markus Reischl, Ralf Mikut, Banban Huang, Hsiang-Chin Chien, Ching-Ping Wang, Chia-Yen Lee, Hong-Kun Lin, Zaiyi Liu, Xipeng Pan, Chu Han, Jijun Cheng, Muhammad Dawood, Srijay Deshpande, Raja Muhammad Saad Bashir, Adam Shephard, Pedro Costa, João D. Nunes, Aurélio Campilho, Jaime S. Cardoso, Hrishikesh P S, Densen Puthussery, Devika R G, Jiji C V, Ye Zhang, Zijie Fang, Zhifan Lin, Yongbing Zhang, Chunhui Lin, Liukun Zhang, Lijian Mao, Min Wu, Vi Thi-Tuong Vo, Soo-Hyung Kim, Taebum Lee, Satoshi Kondo, Satoshi Kasai, Pranay Dumbhare, Vedant Phuse, Yash Dubey, Ankush Jamthikar, Trinh Thi Le Vuong, Jin Tae Kwak, Dorsa Ziaei, Hyun Jung, Tianyi Miao, David Snead, Shan E Ahmed Raza, Fayyaz Minhas, Nasir M. Rajpoot

Nuclear detection, segmentation and morphometric profiling are essential in helping us further understand the relationship between histology and patient outcome. To drive innovation in this area, we set up a community-wide challenge using the largest available dataset of its kind to assess nuclear segmentation and cellular composition. Our challenge, named CoNIC, stimulated the development of reproducible algorithms for cellular recognition with real-time result inspection on public leaderboards. We conducted an extensive post-challenge analysis based on the top-performing models using 1,658 whole-slide images of colon tissue. With around 700 million detected nuclei per model, associated features were used for dysplasia grading and survival analysis, where we demonstrated that the challenge's improvement over the previous state-of-the-art led to significant boosts in downstream performance. Our findings also suggest that eosinophils and neutrophils play an important role in the tumour microenvironment. We release challenge models and WSI-level results to foster the development of further methods for biomarker discovery.

Dual-UNet: A Novel Siamese Network for Change Detection with Cascade Differential Fusion

Aug 12, 2022
Kaixuan Jiang, Jia Liu, Fang Liu, Wenhua Zhang, Yangguang Liu

Change detection (CD) in remote sensing aims to detect changed regions by analyzing the difference between two bitemporal images. It is extensively used in land resource planning, natural hazard monitoring and other fields. In this study, we propose a novel Siamese neural network for the change detection task, namely Dual-UNet. In contrast to previous methods that encode the bitemporal images individually, we design an encoder differential-attention module that focuses on the spatial difference relationships between pixels. To improve the generalization of the network, the module computes attention weights between pixels across the bitemporal images and uses them to generate more discriminative features. To improve feature fusion and avoid vanishing gradients, a multi-scale weighted variance map fusion strategy is proposed for the decoding stage. Experiments demonstrate that the proposed approach consistently outperforms the most advanced methods on popular seasonal change detection datasets.
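
A minimal PyTorch sketch of the idea follows, assuming a shared (Siamese) encoder and a differential-attention block that weights locations by how strongly the bitemporal features differ. All module names and layer sizes here are illustrative, not the paper's implementation.

    # Minimal PyTorch sketch (assumed names, not the paper's code): a Siamese
    # encoder applied to both temporal images, followed by a differential-attention
    # style module that weights pixel locations by how strongly they differ.
    import torch
    import torch.nn as nn

    class DifferentialAttention(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.score = nn.Sequential(
                nn.Conv2d(channels, channels // 2, 1), nn.ReLU(inplace=True),
                nn.Conv2d(channels // 2, 1, 1), nn.Sigmoid())

        def forward(self, feat_t1, feat_t2):
            diff = torch.abs(feat_t1 - feat_t2)          # where the scene changed
            attn = self.score(diff)                      # per-pixel change weight in [0, 1]
            return diff * attn                           # emphasize discriminative change features

    class DualUNetSketch(nn.Module):
        def __init__(self, in_ch=3, feat_ch=64, num_classes=2):
            super().__init__()
            # Shared encoder: both temporal images pass through the same weights.
            self.encoder = nn.Sequential(
                nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True))
            self.attn = DifferentialAttention(feat_ch)
            self.head = nn.Conv2d(feat_ch, num_classes, 1)

        def forward(self, img_t1, img_t2):
            f1, f2 = self.encoder(img_t1), self.encoder(img_t2)
            return self.head(self.attn(f1, f2))          # per-pixel change logits

    # x1, x2 = torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256)
    # logits = DualUNetSketch()(x1, x2)   # shape (1, 2, 256, 256)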

A Dual-fusion Semantic Segmentation Framework With GAN For SAR Images

Jun 02, 2022
Donghui Li, Jia Liu, Fang Liu, Wenhua Zhang, Andi Zhang, Wenfei Gao, Jiao Shi

Deep learning based semantic segmentation is one of the most popular approaches to remote sensing image segmentation. In this paper, a network based on the widely used encoder-decoder architecture is proposed to segment synthetic aperture radar (SAR) images. To exploit the better representation capability of optical images, we propose to enrich SAR images with optical images generated by a generative adversarial network (GAN) trained on numerous SAR and optical images. These generated optical images serve as expansions of the original SAR images, yielding more robust segmentation results. The optical images generated by the GAN are then stitched together with the corresponding real SAR images, and an attention module applied to the stitched data strengthens the representation of the objects. Experiments indicate that our method is efficient compared to other commonly used methods.
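
A hedged PyTorch sketch of the fusion step is given below, assuming channel-wise concatenation of the SAR image with the GAN-generated optical image and a simple channel-attention block in front of a stand-in encoder-decoder segmenter. The module names and exact fusion details are assumptions, not the paper's implementation.

    # Hedged sketch: a pretrained SAR-to-optical generator enriches the SAR input,
    # the two are concatenated channel-wise, and an attention block re-weights the
    # stitched channels before a generic segmentation head.
    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        """Squeeze-and-excitation style re-weighting of the stitched channels."""
        def __init__(self, channels, reduction=4):
            super().__init__()
            self.fc = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())

        def forward(self, x):
            return x * self.fc(x)

    class DualFusionSegmenter(nn.Module):
        def __init__(self, sar_to_optical: nn.Module, num_classes=6):
            super().__init__()
            self.generator = sar_to_optical            # GAN generator, trained separately
            in_ch = 1 + 3                              # SAR (1 ch) stitched with generated optical (3 ch)
            self.attn = ChannelAttention(in_ch)
            self.segmenter = nn.Sequential(            # stand-in for an encoder-decoder network
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, num_classes, 1))

        def forward(self, sar):
            with torch.no_grad():                      # generator is frozen at segmentation time
                fake_optical = self.generator(sar)
            stitched = torch.cat([sar, fake_optical], dim=1)
            return self.segmenter(self.attn(stitched))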

* 4 pages, 4 figures, 2022 IEEE International Geoscience and Remote Sensing Symposium

AugHover-Net: Augmenting Hover-net for Nucleus Segmentation and Classification

Apr 02, 2022
Wenhua Zhang, Jun Zhang

Nuclei segmentation and classification have been a challenge in digital pathology due to the specific characteristics of the domain. First, annotating a large-scale dataset is quite time-consuming and requires specific domain knowledge and considerable effort. Second, some nuclei are clustered together and are hard to separate from each other. Third, the classes are often extremely imbalanced: in Lizard, the number of epithelial nuclei is around 67 times larger than the number of eosinophil nuclei. Fourth, the nuclei often exhibit high inter-class similarity and intra-class variability; connective nuclei may look very different from each other, while some of them share a similar shape with epithelial ones. Last but not least, pathological patches may have very different color distributions across datasets. Thus, a large-scale annotated dataset and a specially designed algorithm are needed to solve this problem. The CoNIC challenge promotes automatic segmentation and classification and requires researchers to develop algorithms that perform segmentation, classification, and counting of 6 different types of nuclei on the large-scale annotated Lizard dataset. Due to the 60-minute time limit, the algorithm has to be simple and fast. In this paper, we briefly describe the final method we used in the CoNIC challenge. Our algorithm is based on Hover-Net, with several modifications added to improve its performance.
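
The abstract does not state how the extreme class imbalance was handled; purely as an illustration, one common remedy is inverse-frequency class weighting in the classification loss, sketched below with hypothetical per-class nucleus counts.

    # Illustrative only (not necessarily the authors' modification): inverse-frequency
    # class weights to soften an imbalance like Lizard's ~67:1 epithelial-to-eosinophil ratio.
    import torch
    import torch.nn as nn

    # Hypothetical per-class nucleus counts in the training set (6 classes).
    counts = torch.tensor([600_000.0, 9_000.0, 40_000.0, 30_000.0, 25_000.0, 70_000.0])
    weights = counts.sum() / (len(counts) * counts)      # inverse-frequency weighting
    weights = weights / weights.mean()                   # keep the overall loss scale roughly unchanged

    criterion = nn.CrossEntropyLoss(weight=weights)      # rare classes now contribute more to the loss
    # loss = criterion(logits, labels)                   # logits: (N, 6), labels: (N,)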

CoNIC Solution

Mar 04, 2022
Wenhua Zhang

Nuclei segmentation and classification have been a challenge due to the high inter-class similarity and intra-class variability, so a large-scale annotated dataset and a specially designed algorithm are needed to solve this problem. The Lizard dataset, containing around half a million annotated nuclei, is therefore a major contribution to this area. In this paper, we propose the two-stage pipeline we used in the CoNIC competition, which achieves much better results than the baseline method. We adopt a model similar to the original baseline, HoVer-Net, as the segmentation model, and then develop a new classification model to fine-tune the classification results. Code for this method will be made public soon.
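
A hedged sketch of such a two-stage inference pipeline is shown below, with placeholder callables standing in for the HoVer-Net-style segmenter and the fine-tuning classifier; this is not the released code.

    # Stage 1 segments nucleus instances and gives coarse labels; stage 2 re-classifies
    # each instance from a crop around its centroid to refine the class predictions.
    import numpy as np

    def two_stage_pipeline(patch, segmenter, classifier, crop_size=32):
        """patch: HxWx3 image; segmenter/classifier: callables standing in for the
        HoVer-Net-style model and the fine-tuning classification model."""
        instance_map, coarse_classes = segmenter(patch)          # stage 1: instances + rough labels
        refined = dict(coarse_classes)
        half = crop_size // 2
        for inst_id in np.unique(instance_map):
            if inst_id == 0:                                      # 0 = background
                continue
            ys, xs = np.nonzero(instance_map == inst_id)
            cy, cx = int(ys.mean()), int(xs.mean())               # nucleus centroid
            crop = patch[max(0, cy - half):cy + half, max(0, cx - half):cx + half]
            refined[inst_id] = classifier(crop)                   # stage 2: refined class label
        return instance_map, refined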

Hierarchical Feature-Aware Tracking

Oct 18, 2019
Wenhua Zhang, Licheng Jiao, Jia Liu

In this paper, we propose a hierarchical feature-aware tracking framework for efficient visual tracking. In recent years, ensemble trackers that combine multiple component trackers have achieved impressive performance. In such trackers, the decision is usually a post-event process: a tracking result is first obtained for each component tracker, and then the most suitable one is selected by result ensemble. In this paper, we propose a pre-event method instead. We construct an expert pool in which each expert is one set of features. For each frame, several experts are first selected from the pool according to their past performance and then used to predict the object. The selection rate of each expert in the pool is then updated, and the tracking result is obtained by result ensemble. We propose a novel pre-known, expert-adaptive selection strategy. Since this process is more efficient, more experts can be constructed by fusing more types of features, which leads to greater robustness. Moreover, with the novel expert selection strategy, overfitting caused by using fixed experts for every frame can be mitigated. Experiments on several publicly available datasets demonstrate the superiority of the proposed method and its state-of-the-art performance among ensemble trackers.
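
A minimal sketch of the pre-event selection loop follows, assuming a simple selection-rate update based on agreement with the ensembled result; the update rule and names are illustrative, not the paper's exact strategy.

    # Maintain a selection rate per expert (one feature set each), pick the top-scoring
    # experts *before* tracking the frame, ensemble their predictions, then update each
    # selected expert's rate from how well it agreed with the ensembled result.
    import numpy as np

    class ExpertPool:
        def __init__(self, experts, top_k=3, lr=0.1):
            self.experts = experts                 # callables: frame -> predicted box (x, y, w, h)
            self.rates = np.ones(len(experts))     # selection rate per expert
            self.top_k, self.lr = top_k, lr

        def track(self, frame):
            chosen = np.argsort(self.rates)[-self.top_k:]          # pre-event selection
            boxes = np.array([self.experts[i](frame) for i in chosen])
            result = boxes.mean(axis=0)                            # simple result ensemble
            for i, box in zip(chosen, boxes):                      # reward agreement with the ensemble
                agreement = 1.0 / (1.0 + np.linalg.norm(box - result))
                self.rates[i] = (1 - self.lr) * self.rates[i] + self.lr * agreement
            return result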
