Haofeng Li

The Multi-modality Cell Segmentation Challenge: Towards Universal Solutions

Aug 10, 2023
Jun Ma, Ronald Xie, Shamini Ayyadhury, Cheng Ge, Anubha Gupta, Ritu Gupta, Song Gu, Yao Zhang, Gihun Lee, Joonkee Kim, Wei Lou, Haofeng Li, Eric Upschulte, Timo Dickscheid, José Guilherme de Almeida, Yixin Wang, Lin Han, Xin Yang, Marco Labagnara, Sahand Jamal Rahi, Carly Kempster, Alice Pollitt, Leon Espinosa, Tâm Mignot, Jan Moritz Middeke, Jan-Niklas Eckardt, Wangkai Li, Zhaoyang Li, Xiaochen Cai, Bizhe Bai, Noah F. Greenwald, David Van Valen, Erin Weisbart, Beth A. Cimini, Zhuoshi Li, Chao Zuo, Oscar Brück, Gary D. Bader, Bo Wang

Cell segmentation is a critical step for quantitative single-cell analysis in microscopy images. Existing cell segmentation methods are often tailored to specific modalities or require manual interventions to specify hyperparameters in different experimental settings. Here, we present a multi-modality cell segmentation benchmark comprising over 1,500 labeled images derived from more than 50 diverse biological experiments. The top participants developed a Transformer-based deep learning algorithm that not only outperforms existing methods but can also be applied to diverse microscopy images across imaging platforms and tissue types without manual parameter adjustments. This benchmark and the improved algorithm offer promising avenues for more accurate and versatile cell analysis in microscopy imaging.
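
As background on how such benchmarks are typically scored: instance segmentation quality is commonly measured by an F1 score at a fixed IoU threshold. The sketch below implements that common formulation; it is illustrative only, and the challenge's official metric may differ in detail.

```python
import numpy as np

def instance_f1(pred_labels, gt_labels, iou_thr=0.5):
    """F1 over instance matches; inputs are integer instance maps, 0 = background."""
    pred_ids = [i for i in np.unique(pred_labels) if i != 0]
    gt_ids = [i for i in np.unique(gt_labels) if i != 0]
    matched, tp = set(), 0
    for g in gt_ids:
        g_mask = gt_labels == g
        for p in pred_ids:
            if p in matched:
                continue
            inter = np.logical_and(g_mask, pred_labels == p).sum()
            union = np.logical_or(g_mask, pred_labels == p).sum()
            if union and inter / union >= iou_thr:   # greedy one-to-one matching
                tp += 1
                matched.add(p)
                break
    fp, fn = len(pred_ids) - tp, len(gt_ids) - tp
    return 2 * tp / max(2 * tp + fp + fn, 1)
```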

* NeurIPS22 Cell Segmentation Challenge: https://neurips22-cellseg.grand-challenge.org/ 

Structure Embedded Nucleus Classification for Histopathology Images

Feb 22, 2023
Wei Lou, Xiang Wan, Guanbin Li, Xiaoying Lou, Chenghang Li, Feng Gao, Haofeng Li

Nuclei classification provides valuable information for histopathology image analysis. However, the large variations in the appearance of different nuclei types make nuclei difficult to identify. Most neural-network-based methods are limited by the local receptive field of convolutions and pay little attention to the spatial distribution of nuclei or the irregular contour shape of a nucleus. In this paper, we first propose a novel polygon-structure feature learning mechanism that transforms a nucleus contour into a sequence of points sampled in order, and employ a recurrent neural network that aggregates the sequential change in distance between key points to obtain learnable shape features. Next, we convert a histopathology image into a graph structure with nuclei as nodes, and build a graph neural network to embed the spatial distribution of nuclei into their representations. To capture the correlations between the categories of nuclei and their surrounding tissue patterns, we further introduce edge features defined as the background textures between adjacent nuclei. Lastly, we integrate both the polygon and graph structure learning mechanisms into a single framework that extracts intra- and inter-nucleus structural characteristics for nuclei classification. Experimental results show that the proposed framework achieves significant improvements over state-of-the-art methods.
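
A minimal sketch of the contour-as-sequence idea: sample ordered points along a nucleus boundary, derive distance-based features, and aggregate them with an RNN. This is one plausible reading of the mechanism, not the paper's exact architecture; the layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class PolygonShapeEncoder(nn.Module):
    """Encode an ordered nucleus contour into a fixed-size shape feature."""

    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)

    def forward(self, contours):
        # contours: (B, N, 2) float boundary points sampled in order per nucleus
        centroid = contours.mean(dim=1, keepdim=True)
        radial = (contours - centroid).norm(dim=-1, keepdim=True)  # point-to-centroid distance
        step = (contours[:, 1:] - contours[:, :-1]).norm(dim=-1, keepdim=True)
        close = (contours[:, :1] - contours[:, -1:]).norm(dim=-1, keepdim=True)
        step = torch.cat([step, close], dim=1)                     # close the polygon
        seq = torch.cat([radial, step], dim=-1)                    # (B, N, 2) distance sequence
        _, h = self.rnn(seq)                                       # aggregate sequential changes
        return h.squeeze(0)                                        # (B, hidden) shape feature
```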

Which Pixel to Annotate: a Label-Efficient Nuclei Segmentation Framework

Dec 20, 2022
Wei Lou, Haofeng Li, Guanbin Li, Xiaoguang Han, Xiang Wan

Recently, deep neural networks, which require large amounts of annotated samples, have been widely applied to nuclei instance segmentation of H&E-stained pathology images. However, it is inefficient and unnecessary to label all pixels in a dataset of nuclei images, which usually contains similar and redundant patterns. Although unsupervised and semi-supervised learning methods have been studied for nuclei segmentation, very few works have delved into the selective labeling of samples to reduce the annotation workload. Thus, in this paper, we propose a novel full nuclei segmentation framework that chooses only a few image patches to be annotated, augments the training set from the selected samples, and achieves nuclei segmentation in a semi-supervised manner. In the proposed framework, we first develop a novel consistency-based patch selection method to determine which image patches are the most beneficial to training. Then we introduce a conditional single-image GAN with a component-wise discriminator to synthesize more training samples. Lastly, our proposed framework trains an existing segmentation model with the above augmented samples. Experimental results show that our proposed method can match the performance of a fully supervised baseline while annotating less than 5% of the pixels on some benchmarks.
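
One way to read "consistency-based patch selection" is to rank unlabeled patches by how much the model's predictions disagree across random augmentations and annotate the least consistent ones first. The sketch below follows that assumption; `model.predict` and `augment` are hypothetical callables, not the released API.

```python
import numpy as np

def select_patches_for_annotation(model, patches, augment, n_select=5, n_views=4):
    # Score each unlabeled patch by the variance of predictions across
    # augmented views; high variance = low consistency = likely informative.
    scores = []
    for patch in patches:
        preds = np.stack([model.predict(augment(patch)) for _ in range(n_views)])
        scores.append(preds.var(axis=0).mean())
    ranked = np.argsort(scores)[::-1]               # least consistent first
    return [patches[i] for i in ranked[:n_select]]
```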

* IEEE TMI 2022, Released code: https://github.com/lhaof/NuSeg 

View-Disentangled Transformer for Brain Lesion Detection

Sep 20, 2022
Haofeng Li, Junjia Huang, Guanbin Li, Zhou Liu, Yihong Zhong, Yingying Chen, Yunfei Wang, Xiang Wan

Deep neural networks (DNNs) have been widely adopted in brain lesion detection and segmentation. However, locating small lesions in 2D MRI slices is challenging and requires balancing the granularity of 3D context aggregation against computational complexity. In this paper, we propose a novel view-disentangled transformer to enhance the extraction of MRI features for more accurate tumour detection. First, the proposed transformer harvests long-range correlations among different positions in a 3D brain scan. Second, it models a stack of slice features as multiple 2D views and enhances these features view by view, which approximates 3D correlation computation in an efficient way. Third, we deploy the proposed transformer module in a transformer backbone, which can effectively detect the 2D regions surrounding brain lesions. Experimental results show that the proposed view-disentangled transformer performs well for brain lesion detection on a challenging brain MRI dataset.
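
The view-by-view idea resembles factorized attention: attend within each 2D view, then across slices, instead of over all 3D positions at once. Below is a generic sketch of that pattern under our own naming; it is not the exact module from the paper.

```python
import torch
import torch.nn as nn

class FactorizedViewAttention(nn.Module):
    # Attend within each 2D slice, then across slices along depth: a cheap
    # approximation of full 3D self-attention over a volume.
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def _attend(self, x):
        out, _ = self.attn(x, x, x)
        return out

    def forward(self, feat):
        # feat: (B, D, H, W, C) stack of slice features
        B, D, H, W, C = feat.shape
        x = self._attend(feat.reshape(B * D, H * W, C))       # within-slice view
        x = x.reshape(B, D, H, W, C).permute(0, 2, 3, 1, 4)   # (B, H, W, D, C)
        x = self._attend(x.reshape(B * H * W, D, C))          # across-slice view
        return x.reshape(B, H, W, D, C).permute(0, 3, 1, 2, 4)
```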

* International Symposium on Biomedical Imaging (ISBI) 2022, code: https://github.com/lhaof/ISBI-VDFormer 

Attentive Symmetric Autoencoder for Brain MRI Segmentation

Sep 19, 2022
Junjia Huang, Haofeng Li, Guanbin Li, Xiang Wan

Self-supervised learning methods based on image patch reconstruction have witnessed great success in training auto-encoders, whose pre-trained weights can be transferred to fine-tune other downstream tasks of image understanding. However, existing methods seldom study the varying importance of reconstructed patches and the symmetry of anatomical structures when applied to 3D medical images. In this paper, we propose a novel Attentive Symmetric Auto-encoder (ASA) based on the Vision Transformer (ViT) for 3D brain MRI segmentation tasks. We conjecture that forcing the auto-encoder to recover informative image regions can harvest more discriminative representations than recovering smooth image patches. We therefore adopt a gradient-based metric to estimate the importance of each image patch. In the pre-training stage, the proposed auto-encoder pays more attention to reconstructing the informative patches according to the gradient metric. Moreover, we resort to the prior of brain structures and develop a Symmetric Position Encoding (SPE) method that better exploits the correlations between long-range but spatially symmetric regions to obtain effective features. Experimental results show that our proposed attentive symmetric auto-encoder outperforms state-of-the-art self-supervised learning methods and medical image segmentation models on three brain MRI segmentation benchmarks.
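
One plausible realization of SPE is to fold positional indices about the mid-sagittal plane so that left-right mirrored patches share an index, and hence related embeddings. The helper below sketches that assumption; the paper's exact formulation may differ.

```python
import torch

def symmetric_position_ids(d, h, w):
    # Fold the left-right (width) axis so mirrored voxel positions share an id.
    zz, yy, xx = torch.meshgrid(
        torch.arange(d), torch.arange(h), torch.arange(w), indexing="ij")
    xx_folded = torch.minimum(xx, w - 1 - xx)            # mirror about the midline
    return (zz * h + yy) * ((w + 1) // 2) + xx_folded    # unique id per folded position

# Usage: ids = symmetric_position_ids(8, 8, 8).flatten() can index a learnable
# nn.Embedding table, giving symmetric patches identical position encodings.
```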

* MICCAI 2022, code: https://github.com/lhaof/ASA 

Robust Real-World Image Super-Resolution against Adversarial Attacks

Jul 31, 2022
Jiutao Yue, Haofeng Li, Pengxu Wei, Guanbin Li, Liang Lin

Recently, deep neural networks (DNNs) have achieved significant success in real-world image super-resolution (SR). However, adversarial image samples with quasi-imperceptible noise can threaten deep learning SR models. In this paper, we propose a robust deep learning framework for real-world SR that randomly erases potential adversarial noise in the frequency domain of input images or features. The rationale is that, on the SR task, clean images or features have a different frequency-domain pattern from attacked ones. Observing that existing adversarial attacks usually add high-frequency noise to input images, we introduce a novel random frequency mask module that blocks out high-frequency components possibly containing the harmful perturbations in a stochastic manner. Since frequency masking may not only destroy the adversarial perturbations but also affect sharp details in a clean image, we further develop an adversarial sample classifier, based on the frequency domain of images, to determine whether to apply the proposed mask module. Based on the above ideas, we devise a novel real-world image SR framework that combines the proposed frequency mask modules and the proposed adversarial classifier with an existing super-resolution backbone network. Experiments show that our proposed method is less sensitive to adversarial attacks and produces more stable SR results than existing models and defenses.
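
To make the frequency-masking idea concrete, the sketch below zeroes a random subset of high-frequency FFT coefficients of an input image. The cutoff, drop probability, and circular frequency band are our own illustrative choices, not the paper's configuration.

```python
import torch

def random_high_freq_mask(img, cutoff=0.25, drop_prob=0.5):
    # img: (..., H, W) real tensor. Zero out high-frequency coefficients
    # (where adversarial noise tends to concentrate) at random.
    f = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    h, w = img.shape[-2:]
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    radius = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2).sqrt()
    high = radius > cutoff * min(h, w) / 2        # high-frequency band
    keep = torch.rand(h, w) > drop_prob           # stochastic erasing
    mask = (~high | keep).to(f.dtype)             # low frequencies always kept
    out = torch.fft.ifft2(torch.fft.ifftshift(f * mask, dim=(-2, -1)))
    return out.real
```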

* Proceedings of the 29th ACM International Conference on Multimedia (2021) 5148-5157  
* ACM-MM 2021, Code: https://github.com/lhaof/Robust-SR-against-Adversarial-Attacks 

BronchusNet: Region and Structure Prior Embedded Representation Learning for Bronchus Segmentation and Classification

May 24, 2022
Wenhao Huang, Haifan Gong, Huan Zhang, Yu Wang, Haofeng Li, Guanbin Li, Hong Shen

CT-based bronchial tree analysis plays an important role in computer-aided diagnosis for respiratory diseases, as it provides structured information for clinicians. The basis of airway analysis is bronchial tree reconstruction, which consists of bronchus segmentation and classification. However, accurate bronchial analysis remains challenging due to individual variations and severe class imbalance. In this paper, we propose a region and structure prior embedded framework named BronchusNet to achieve accurate segmentation and classification of bronchial regions in CT images. For bronchus segmentation, we propose an adaptive hard region-aware UNet that incorporates multi-level prior guidance of hard pixel-wise samples into the general UNet segmentation network to achieve better hierarchical feature learning. For the classification of bronchial branches, we propose a hybrid point-voxel graph learning module to fully exploit bronchial structure priors and to support simultaneous feature interactions across different branches. To facilitate the study of bronchial analysis, we contribute BRSC: an open-access benchmark for BRonchus imaging analysis with high-quality pixel-wise Segmentation masks and the Class of bronchial segments. Experimental results on BRSC show that our proposed method not only achieves state-of-the-art performance for binary segmentation of the bronchial region but also exceeds the best existing method on bronchial branch classification by 6.9%.
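
The abstract does not spell out the hard-region guidance, but a focal-style loss is one standard way to up-weight hard pixel-wise samples in a UNet. The sketch below shows that reading; treat it as an assumption rather than the paper's exact scheme.

```python
import torch
import torch.nn.functional as F

def hard_pixel_weighted_loss(logits, target, gamma=2.0):
    # Focal-style weighting: down-weight easy pixels, emphasize hard ones.
    ce = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * target + (1 - p) * (1 - target)   # probability of the true class
    return ((1 - p_t) ** gamma * ce).mean()     # hard pixels get large weight
```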

SSMD: Semi-Supervised Medical Image Detection with Adaptive Consistency and Heterogeneous Perturbation

Jun 03, 2021
Hong-Yu Zhou, Chengdi Wang, Haofeng Li, Gang Wang, Shu Zhang, Weimin Li, Yizhou Yu

Semi-supervised classification and segmentation methods have been widely investigated in medical image analysis; both can improve the performance of fully supervised methods with additional unlabeled data. However, as a fundamental task, semi-supervised object detection has not gained enough attention in the field of medical image analysis. In this paper, we propose a novel Semi-Supervised Medical image Detector (SSMD). The motivation behind SSMD is to provide free yet effective supervision for unlabeled data by regularizing the predictions at each position to be consistent. To achieve this, we develop a novel adaptive consistency cost function to regularize different components in the predictions. Moreover, we introduce heterogeneous perturbation strategies that work in both feature space and image space, so that the proposed detector produces powerful image representations and robust predictions. Extensive experimental results show that the proposed SSMD achieves state-of-the-art performance across a wide range of settings. We also demonstrate the strength of each proposed module with comprehensive ablation studies.
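
As a rough illustration of position-wise consistency regularization, the sketch below penalizes disagreement between two perturbed views of an unlabeled image, weighted by prediction confidence. The confidence weighting is our stand-in for "adaptive"; SSMD's actual cost function regularizes multiple detector components.

```python
import torch

def adaptive_consistency_cost(pred_a, pred_b, conf_floor=0.1):
    # pred_a, pred_b: (B, C, H, W) logits from two perturbed views of one image.
    prob_a, prob_b = pred_a.softmax(dim=1), pred_b.softmax(dim=1)
    weight = prob_a.max(dim=1).values.detach().clamp(min=conf_floor)
    mse = ((prob_a - prob_b) ** 2).mean(dim=1)   # position-wise disagreement
    return (weight * mse).mean()                 # confident positions count more
```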

* Accepted by Medical Image Analysis 

Online Alternate Generator against Adversarial Attacks

Sep 17, 2020
Haofeng Li, Yirui Zeng, Guanbin Li, Liang Lin, Yizhou Yu

The field of computer vision has witnessed phenomenal progress in recent years, partially due to the development of deep convolutional neural networks. However, deep learning models are notoriously sensitive to adversarial examples, which are synthesized by adding quasi-imperceptible noise to real images. Some existing defense methods require re-training the attacked target networks and augmenting the training set with known adversarial attacks, which is inefficient and may fail against unknown attack types. To overcome these issues, we propose a portable defense method, the online alternate generator, which does not need to access or modify the parameters of the target networks. Instead of removing or destroying adversarial noise, the proposed method synthesizes another image from scratch for each input image at inference time. To prevent attackers from exploiting pretrained parameters, we alternately update the generator and the synthesized image at the inference stage. Experimental results demonstrate that the proposed defense outperforms a series of state-of-the-art defense models against gray-box adversarial attacks.
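
The alternate-update loop can be sketched as follows: re-synthesize the input by alternately optimizing the generator's weights and a latent code, so no fixed pretrained parameters exist for an attacker to exploit. `generator` and its `latent_dim` attribute are hypothetical; the loss, step count, and learning rates are illustrative.

```python
import torch

def purify(image, generator, steps=50, lr_g=1e-4, lr_z=1e-2):
    # image: (1, C, H, W); `generator` maps a latent z to an image and exposes
    # a hypothetical `latent_dim` attribute. All names here are illustrative.
    z = torch.randn(1, generator.latent_dim, requires_grad=True)
    opt_g = torch.optim.Adam(generator.parameters(), lr=lr_g)
    opt_z = torch.optim.Adam([z], lr=lr_z)
    for step in range(steps):
        opt_g.zero_grad()
        opt_z.zero_grad()
        loss = ((generator(z) - image) ** 2).mean()  # rebuild the input from scratch
        loss.backward()
        (opt_g if step % 2 == 0 else opt_z).step()   # alternate the updates
    return generator(z).detach()                     # purified image for the target net
```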

* Accepted as a Regular paper in the IEEE Transactions on Image Processing 