Bingzheng Wei

DRMC: A Generalist Model with Dynamic Routing for Multi-Center PET Image Synthesis

Jul 11, 2023
Zhiwen Yang, Yang Zhou, Hui Zhang, Bingzheng Wei, Yubo Fan, Yan Xu

Multi-center positron emission tomography (PET) image synthesis aims at recovering low-dose PET images from multiple different centers. The generalizability of existing methods can still be suboptimal for a multi-center study due to domain shifts, which result from non-identical data distribution among centers with different imaging systems/protocols. While some approaches address domain shifts by training specialized models for each center, they are parameter inefficient and do not fully exploit the shared knowledge across centers. To address this, we develop a generalist model that shares architecture and parameters across centers to utilize the shared knowledge. However, the generalist model can suffer from the center interference issue, i.e., the gradient directions of different centers can be inconsistent or even opposite owing to the non-identical data distribution. To mitigate such interference, we introduce a novel dynamic routing strategy with cross-layer connections that routes data from different centers to different experts. Experiments show that our generalist model with dynamic routing (DRMC) exhibits excellent generalizability across centers. Code and data are available at: https://github.com/Yaziwel/Multi-Center-PET-Image-Synthesis.
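The abstract does not spell out DRMC's router or its cross-layer connections, but the core idea of routing data to different experts can be illustrated with a minimal mixture-of-experts sketch. Everything below is hypothetical: the dimensions, the linear experts, and the `dynamic_route` helper are stand-ins, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes: 3 experts over 8-dim features.
n_experts, d = 3, 8
W_router = rng.normal(size=(d, n_experts))        # learned router weights
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]

def dynamic_route(x):
    """Gate each (toy, linear) expert's output by a data-dependent score,
    so inputs from different centers can favor different experts."""
    gates = softmax(x @ W_router)                 # (batch, n_experts), rows sum to 1
    outs = np.stack([x @ E for E in experts], 1)  # (batch, n_experts, d)
    return (gates[..., None] * outs).sum(axis=1)  # gated mixture of experts

x = rng.normal(size=(4, d))                       # a toy batch of features
y = dynamic_route(x)
print(y.shape)  # (4, 8)
```

In a trained model the router would learn to send each center's data to the experts that suit its distribution, which is what lets the shared parameters avoid conflicting gradients.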

* This article has been early accepted by MICCAI 2023, but has not been fully edited. Content may change prior to final publication.

Zero-shot Nuclei Detection via Visual-Language Pre-trained Models

Jun 30, 2023
Yongjian Wu, Yang Zhou, Jiya Saiyin, Bingzheng Wei, Maode Lai, Jianzhong Shou, Yubo Fan, Yan Xu

Large-scale visual-language pre-trained models (VLPMs) have proven their excellent performance in downstream object detection for natural scenes. However, zero-shot nuclei detection on H&E images via VLPMs remains underexplored. The large gap between medical images and the web-originated text-image pairs used for pre-training makes it a challenging task. In this paper, we attempt to explore the potential of the object-level VLPM, the Grounded Language-Image Pre-training (GLIP) model, for zero-shot nuclei detection. Concretely, an automatic prompt design pipeline is devised based on the association binding trait of VLPMs and the image-to-text VLPM BLIP, avoiding empirical manual prompt engineering. We further establish a self-training framework, using the automatically designed prompts to generate preliminary results from GLIP as pseudo labels and refining the predicted boxes in an iterative manner. Our method achieves remarkable performance for label-free nuclei detection, surpassing other comparison methods. Most importantly, our work demonstrates that VLPMs pre-trained on natural image-text pairs exhibit astonishing potential for downstream tasks in the medical field as well. Code will be released at https://github.com/wuyongjianCODE/VLPMNuD.
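The iterative self-training loop described above — predict, keep confident outputs as pseudo labels, refine, repeat — can be sketched abstractly. The `detector_scores` function here is a hypothetical stand-in for GLIP's box scoring, and the update rule is a toy surrogate, not the paper's refinement procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def detector_scores(images, weights):
    """Hypothetical stand-in for GLIP scoring: one confidence per candidate box."""
    return 1.0 / (1.0 + np.exp(-(images @ weights)))

# Toy data: 6 candidates, each summarized by a 4-dim feature vector.
images = rng.normal(size=(6, 4))
weights = rng.normal(size=4)

for it in range(3):                               # iterative self-training rounds
    scores = detector_scores(images, weights)
    pseudo = (scores > 0.5).astype(float)         # confident predictions become pseudo labels
    # Refine the detector toward agreeing with its own confident pseudo labels.
    grad = images.T @ (scores - pseudo) / len(images)
    weights -= 0.5 * grad
```

Each round sharpens the detector toward its high-confidence predictions, which is the mechanism that lets a zero-shot model bootstrap itself without manual labels.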

* This article has been accepted by MICCAI 2023, but has not been fully edited. Content may change prior to final publication.

Cyclic Learning: Bridging Image-level Labels and Nuclei Instance Segmentation

Jun 05, 2023
Yang Zhou, Yongjian Wu, Zihua Wang, Bingzheng Wei, Maode Lai, Jianzhong Shou, Yubo Fan, Yan Xu

Nuclei instance segmentation on histopathology images is of great clinical value for disease analysis. Generally, fully-supervised algorithms for this task require pixel-wise manual annotations, which are especially time-consuming and laborious given the high nuclei density. To alleviate the annotation burden, we seek to solve the problem through image-level weakly supervised learning, which is underexplored for nuclei instance segmentation. Compared with most existing methods using other weak annotations (scribble, point, etc.) for nuclei instance segmentation, our method is more labor-saving. The obstacle to using image-level annotations in nuclei instance segmentation is the lack of adequate location information, leading to severe nuclei omission or overlaps. In this paper, we propose a novel image-level weakly supervised method, called cyclic learning, to solve this problem. Cyclic learning comprises a front-end classification task and a back-end semi-supervised instance segmentation task to benefit from multi-task learning (MTL). We utilize a deep learning classifier with interpretability as the front-end to convert image-level labels to sets of high-confidence pseudo masks, and establish a semi-supervised architecture as the back-end to conduct nuclei instance segmentation under the supervision of these pseudo masks. Most importantly, cyclic learning is designed to circularly share knowledge between the front-end classifier and the back-end semi-supervised part, which allows the whole system to fully extract the underlying information from image-level labels and converge to a better optimum. Experiments on three datasets demonstrate the generality of our method, which outperforms other image-level weakly supervised methods for nuclei instance segmentation and achieves comparable performance to fully-supervised methods.
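The front-end/back-end cycle described above can be sketched as a loop: an interpretable front-end turns image-level signals into high-confidence pseudo masks, a back-end fits a segmenter to them, and knowledge flows back. Both stages below are toy linear stand-ins under stated assumptions, not the paper's networks; `frontend_pseudo_masks` and `backend_segment` are hypothetical names.

```python
import numpy as np

rng = np.random.default_rng(1)

def frontend_pseudo_masks(images, w):
    """Stand-in for the interpretable classifier: score every pixel,
    keep only the most confident 20% as pseudo-mask pixels."""
    scores = np.einsum('nhwc,c->nhw', images, w)
    return scores > np.quantile(scores, 0.8)

def backend_segment(images, masks):
    """Stand-in for the semi-supervised back-end: fit a linear pixel
    classifier to the pseudo masks and return refined masks + weights."""
    X = images.reshape(-1, images.shape[-1])
    y = masks.reshape(-1).astype(float)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (X @ w > 0.5).reshape(masks.shape), w

images = rng.normal(size=(2, 8, 8, 3))           # toy "histopathology" batch
w_front = rng.normal(size=3)
for cycle in range(2):                           # circular knowledge sharing
    masks = frontend_pseudo_masks(images, w_front)
    refined, w_back = backend_segment(images, masks)
    w_front = 0.5 * (w_front + w_back)           # feed back-end knowledge to the front-end
```

The last line is the "cyclic" part: the back-end's learned parameters influence the next round of pseudo-mask generation, so both tasks improve together.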

* This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI https://doi.org/10.1109/TMI.2023.3275609, IEEE Transactions on Medical Imaging. Code: https://github.com/wuyongjianCODE/Cyclic 

Transformer based multiple instance learning for weakly supervised histopathology image segmentation

May 18, 2022
Ziniu Qian, Kailu Li, Maode Lai, Eric I-Chao Chang, Bingzheng Wei, Yubo Fan, Yan Xu

Histopathological image segmentation algorithms play a critical role in computer-aided diagnosis technology. The development of weakly supervised segmentation algorithms alleviates the time-consuming and labor-intensive burden of medical image annotation. As a subset of weakly supervised learning, Multiple Instance Learning (MIL) has been proven to be effective in segmentation. However, there is a lack of relational information between instances in MIL, which limits further improvement of segmentation performance. In this paper, we propose a novel weakly supervised method for pixel-level segmentation in histopathology images, which introduces the Transformer into the MIL framework to capture global or long-range dependencies. The multi-head self-attention in the Transformer establishes relationships between instances, which addresses the shortcoming that instances are independent of each other in MIL. In addition, deep supervision is introduced to overcome the limitation of annotations in weakly supervised methods and make better use of hierarchical information. The state-of-the-art results on the colon cancer dataset demonstrate the superiority of the proposed method compared with other weakly supervised methods. We believe our approach holds potential for various applications in medical imaging.
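The key mechanism here — self-attention relating MIL instances that would otherwise be treated independently — can be shown in a few lines. This is a single-head NumPy sketch with hypothetical dimensions, not the paper's multi-head, deeply supervised model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

d = 16                                            # toy embedding size
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def attend(instances):
    """Single-head self-attention: every instance aggregates context
    from every other instance in the same bag."""
    Q, K, V = instances @ Wq, instances @ Wk, instances @ Wv
    A = softmax(Q @ K.T / np.sqrt(d))             # (n, n) instance-to-instance weights
    return A @ V

bag = rng.normal(size=(10, d))                    # 10 instance (patch) embeddings
ctx = attend(bag)                                 # instances are no longer independent
```

After `attend`, each row mixes information from all patches in the bag — exactly the inter-instance relationship that plain MIL lacks.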

* Provisionally accepted for MICCAI 2022

Group based Personalized Search by Integrating Search Behaviour and Friend Network

Nov 24, 2021
Yujia Zhou, Zhicheng Dou, Bingzheng Wei, Ruobing Xie, Ji-Rong Wen

The key to personalized search is to build the user profile based on historical behaviour. To deal with users who lack historical data, group-based personalized models were proposed to incorporate the profiles of similar users when re-ranking the results. However, similar users are mostly found based on simple lexical or topical similarity in search behaviours. In this paper, we propose a neural-network-enhanced method to highlight similar users in semantic space. Furthermore, we argue that behaviour-based similar users are still insufficient for understanding a new query when a user's historical activities are limited. To tackle this issue, we introduce the friend network into personalized search to determine the closeness between users in another way. Since friendships are often formed based on similar backgrounds or interests, the friend network naturally contains plenty of personalization signals. Specifically, we propose a friend-network-enhanced personalized search model, which groups users into multiple friend circles based on search behaviours and friend relations, respectively. These two types of friend circles are complementary in constructing a more comprehensive group profile for refining the personalization. Experimental results show the significant improvement of our model over existing personalized search models.
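The two complementary circles — one from behaviour similarity in semantic space, one from the friend graph — can be sketched as a simple profile fusion. The embeddings, the toy friend graph, and the `group_profile` helper are all hypothetical illustrations, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

n_users, d = 6, 4
profiles = rng.normal(size=(n_users, d))          # per-user search-behaviour embeddings
friends = {0: [1, 2], 1: [0], 2: [0, 3], 3: [2], 4: [5], 5: [4]}

def behaviour_circle(u, k=2):
    """Top-k most similar users in the (semantic) embedding space."""
    sims = profiles @ profiles[u]
    order = [v for v in np.argsort(-sims) if v != u]
    return order[:k]

def group_profile(u):
    """Fuse the behaviour-based and friend-based circles into one group profile."""
    circle = set(behaviour_circle(u)) | set(friends[u])
    return profiles[list(circle)].mean(axis=0)

g = group_profile(0)                              # richer signal than user 0's own profile
```

When a user has little history, the friend-based circle still contributes members to the group, which is what makes the two circle types complementary.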

* 10 pages 

Coupled Graph Neural Networks for Predicting the Popularity of Online Content

Jun 21, 2019
Qi Cao, Huawei Shen, Jinhua Gao, Bingzheng Wei, Xueqi Cheng

Predicting the popularity of online content in social networks is an important problem for information dissemination, advertising, and recommendation. Previous methods mainly leverage demographic, temporal, and structural patterns of early adopters for popularity prediction. These methods ignore the interactions between early adopters and potential adopters, as well as the interactions among potential adopters, over social networks. Consequently, they fail to capture the cascading effect triggered by early adopters in social networks and thus have limited predictive power. In this paper, we consider the problem of network-aware popularity prediction, leveraging both early adopters and the social networks among users. We propose a novel method, namely Coupled-GNNs, which uses two coupled graph neural networks to capture the cascading effect in information diffusion. One graph neural network models the interpersonal influence, gated by the adoption state of users. The other models the adoption state of users via interpersonal influence from their neighbors. Through such iterative aggregation over the neighborhood, the proposed method naturally captures the cascading effect of information diffusion in social networks. Experiments conducted on both synthetic data and real-world Sina Weibo data demonstrate that our method significantly outperforms state-of-the-art methods for popularity prediction.
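The coupling between the two networks — influence gated by adoption state, and state updated via neighbors' influence — can be illustrated with toy diffusion dynamics on a small graph. The adjacency matrix, update rules, and coefficients below are all hypothetical stand-ins for the learned GNN layers, not the Coupled-GNNs model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], float)              # toy 4-user social network
influence = rng.random(4)                        # interpersonal influence per user
state = np.array([1.0, 0.0, 0.0, 0.0])          # user 0 is the early adopter

for step in range(3):                            # iterative neighborhood aggregation
    # State update: adoption driven by neighbors' influence, gated by their state.
    incoming = A @ (influence * state)
    state = np.clip(state + 0.3 * incoming, 0, 1)
    # Influence update: modulated by the user's own adoption state.
    influence = 0.9 * influence + 0.1 * state

popularity = state.sum()                         # crude proxy for cascade size
```

Because each quantity feeds the other's update, activation spreads outward from the early adopter over iterations — the cascading effect the two coupled networks are designed to capture.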
