
Xiaofan Zhang


Automatic lobe segmentation using attentive cross entropy and end-to-end fissure generation

Jul 24, 2023
Qi Su, Na Wang, Jiawen Xie, Yinan Chen, Xiaofan Zhang

Automatic lung lobe segmentation is of great significance for the diagnosis and treatment of lung diseases; however, it remains challenging due to the incompleteness of pulmonary fissures in lung CT images and the large variability of pathological features. We therefore propose a new automatic lung lobe segmentation framework that urges the model to pay attention to the area around the pulmonary fissure during training, realized through a task-specific loss function. In addition, we introduce an end-to-end pulmonary fissure generation method for the auxiliary pulmonary fissure segmentation task, without any additional network branch. Finally, we propose a registration-based loss function to alleviate the convergence difficulty of the Dice-loss-supervised pulmonary fissure segmentation task. We achieve Dice scores of 97.83% and 94.75% on our private STLB dataset and the public LUNA16 dataset, respectively.

* 5 pages, 3 figures, published to 'IEEE International Symposium on Biomedical Imaging (ISBI) 2023' 
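The task-specific loss is only named in the abstract. As a rough sketch of the underlying idea, paying more attention to voxels near the pulmonary fissure, the snippet below weights a per-voxel cross-entropy by distance to the fissure mask; the exponential weighting, the function name, and the tensor shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F
from scipy.ndimage import distance_transform_edt  # distance to nearest fissure voxel

def attentive_cross_entropy(logits, lobe_labels, fissure_mask, sigma=5.0):
    """Cross-entropy weighted toward voxels near the pulmonary fissure (sketch).

    logits:       (B, C, D, H, W) lobe predictions
    lobe_labels:  (B, D, H, W) integer lobe labels
    fissure_mask: (B, D, H, W) binary fissure annotation (or generated fissure)
    sigma:        decay of the attention weight with distance (illustrative)
    """
    dists = []
    for m in fissure_mask.cpu().numpy():
        # distance (in voxels) from each voxel to the nearest fissure voxel
        dists.append(torch.from_numpy(distance_transform_edt(1 - m)))
    dist = torch.stack(dists).to(logits.device).float()

    # voxels close to the fissure get weights near 2, far voxels stay near 1
    w = 1.0 + torch.exp(-dist / sigma)

    ce = F.cross_entropy(logits, lobe_labels, reduction="none")  # (B, D, H, W)
    return (w * ce).mean()
```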

Efficient Subclass Segmentation in Medical Images

Jul 01, 2023
Linrui Dai, Wenhui Lei, Xiaofan Zhang

As research interests in medical image analysis become increasingly fine-grained, the cost of extensive annotation also rises. One feasible way to reduce this cost is to annotate with coarse-grained superclass labels while using limited fine-grained annotations as a complement, so that fine-grained learning is assisted by ample coarse annotations. Recent studies in classification tasks have adopted this method to achieve satisfactory results. However, there is a lack of research on efficient learning of fine-grained subclasses in semantic segmentation tasks. In this paper, we propose a novel approach that leverages the hierarchical structure of categories to design the network architecture, together with a task-driven data generation method that makes it easier for the network to recognize different subclass categories. Specifically, we introduce a Prior Concatenation module that enhances confidence in subclass segmentation by concatenating predicted logits from the superclass classifier, a Separate Normalization module that stretches the intra-class distance within the same superclass to facilitate subclass segmentation, and a HierarchicalMix model that generates high-quality pseudo labels for unlabeled samples by fusing only similar superclass regions from labeled and unlabeled images. Our experiments on the BraTS2021 and ACDC datasets demonstrate that, with limited subclass annotations and sufficient superclass annotations, our approach achieves accuracy comparable to a model trained with full subclass annotations. Our approach offers a promising solution for efficient fine-grained subclass segmentation in medical images. Our code is publicly available.

* MICCAI 2023 early accept 
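The Prior Concatenation module is described only at a high level. The sketch below shows one plausible reading, concatenating the superclass logits onto the shared feature map before the subclass head; layer choices and shapes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class PriorConcatenation(nn.Module):
    """Feed superclass predictions to the subclass classifier as an extra prior.

    Sketch: superclass logits are concatenated channel-wise onto the shared
    feature map, so the subclass head only refines a decision the superclass
    head has already narrowed down.
    """

    def __init__(self, feat_channels, n_superclasses, n_subclasses):
        super().__init__()
        self.super_head = nn.Conv2d(feat_channels, n_superclasses, kernel_size=1)
        self.sub_head = nn.Conv2d(feat_channels + n_superclasses, n_subclasses, kernel_size=1)

    def forward(self, feats):
        super_logits = self.super_head(feats)             # (B, S, H, W)
        fused = torch.cat([feats, super_logits], dim=1)   # prior concatenation
        sub_logits = self.sub_head(fused)                 # (B, K, H, W)
        return super_logits, sub_logits
```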

MedLSAM: Localize and Segment Anything Model for 3D Medical Images

Jun 30, 2023
Wenhui Lei, Xu Wei, Xiaofan Zhang, Kang Li, Shaoting Zhang

The Segment Anything Model (SAM) has recently emerged as a groundbreaking model in the field of image segmentation. Nevertheless, both the original SAM and its medical adaptations necessitate slice-by-slice annotations, which directly increase the annotation workload with the size of the dataset. We propose MedLSAM to address this issue, ensuring a constant annotation workload irrespective of dataset size and thereby simplifying the annotation process. Our model introduces a few-shot localization framework capable of localizing any target anatomical part within the body. To achieve this, we develop a Localize Anything Model for 3D Medical Images (MedLAM), utilizing two self-supervision tasks: relative distance regression (RDR) and multi-scale similarity (MSS) across a comprehensive dataset of 14,012 CT scans. We then establish a methodology for accurate segmentation by integrating MedLAM with SAM. By annotating only six extreme points across three directions on a few templates, our model can autonomously identify the target anatomical region on all data scheduled for annotation. This allows our framework to generate 2D bounding boxes for every slice of the image, which are then leveraged by SAM to carry out segmentation. We conducted experiments on two 3D datasets covering 38 organs and found that MedLSAM matches the performance of SAM and its medical adaptations while requiring only minimal extreme point annotations for the entire dataset. Furthermore, MedLAM has the potential to be seamlessly integrated with future 3D SAM models, paving the way for enhanced performance. Our code is public at https://github.com/openmedlab/MedLSAM.

* Work in Progress. Code is public at https://github.com/openmedlab/MedLSAM 
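The abstract describes turning a few extreme-point annotations into per-slice 2D box prompts for SAM. The helper below sketches only that last conversion step under an assumed (z, y, x) point ordering; the MedLAM localization itself (RDR and MSS) is not reproduced here.

```python
import numpy as np

def extreme_points_to_box(points):
    """Six extreme points (two per axis, in (z, y, x) order) -> 3D bounding box."""
    pts = np.asarray(points)                      # (6, 3)
    return pts.min(axis=0), pts.max(axis=0)       # (zmin, ymin, xmin), (zmax, ymax, xmax)

def per_slice_prompts(box_min, box_max):
    """Turn a 3D box into the 2D (x0, y0, x1, y1) box prompts SAM expects per slice."""
    zmin, ymin, xmin = box_min
    zmax, ymax, xmax = box_max
    return {z: (xmin, ymin, xmax, ymax) for z in range(int(zmin), int(zmax) + 1)}
```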

KiUT: Knowledge-injected U-Transformer for Radiology Report Generation

Jun 20, 2023
Zhongzhen Huang, Xiaofan Zhang, Shaoting Zhang

Radiology report generation aims to automatically generate a clinically accurate and coherent paragraph from an X-ray image, which could relieve radiologists of the heavy burden of report writing. Although various image captioning methods have shown remarkable performance in the natural image field, generating accurate reports for medical images requires knowledge of multiple modalities, including vision, language, and medical terminology. We propose a Knowledge-injected U-Transformer (KiUT) to learn multi-level visual representations and adaptively distill the information with contextual and clinical knowledge for word prediction. In detail, a U-connection schema between the encoder and decoder is designed to model interactions between different modalities, and a symptom graph together with an injected knowledge distiller is developed to assist report generation. Experimentally, we outperform state-of-the-art methods on two widely used benchmark datasets: IU-Xray and MIMIC-CXR. Further experimental results prove the advantages of our architecture and the complementary benefits of the injected knowledge.
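The U-connection schema is only sketched in the abstract. The toy module below illustrates the general connectivity pattern, encoder layer i feeding the decoder layer at the mirrored depth, using stock PyTorch transformer layers; it is an illustration of the idea, not the KiUT architecture.

```python
import torch.nn as nn

class UConnectedTransformer(nn.Module):
    """Encoder layer i feeds decoder layer (L - 1 - i), mirroring U-Net skips."""

    def __init__(self, d_model=512, nhead=8, num_layers=3):
        super().__init__()
        self.enc_layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            for _ in range(num_layers))
        self.dec_layers = nn.ModuleList(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
            for _ in range(num_layers))

    def forward(self, visual_tokens, word_tokens):
        # collect intermediate encoder states at every depth
        enc_states = []
        x = visual_tokens
        for layer in self.enc_layers:
            x = layer(x)
            enc_states.append(x)

        # decoder layer i cross-attends to the encoder state at the mirrored depth
        y = word_tokens
        for i, layer in enumerate(self.dec_layers):
            y = layer(y, memory=enc_states[len(enc_states) - 1 - i])
        return y
```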


MidMed: Towards Mixed-Type Dialogues for Medical Consultation

Jun 14, 2023
Xiaoming Shi, Zeming Liu, Chuan Wang, Haitao Leng, Kui Xue, Xiaofan Zhang, Shaoting Zhang

Most medical dialogue systems assume that patients have clear goals (medicine querying, surgical operation querying, etc.) before medical consultation. However, in many real scenarios, due to the lack of medical knowledge, it is usually difficult for patients to determine clear goals with all necessary slots. In this paper, we identify this challenge as how to construct medical consultation dialogue systems to help patients clarify their goals. To mitigate this challenge, we propose a novel task and create a human-to-human mixed-type medical consultation dialogue corpus, termed MidMed, covering five dialogue types: task-oriented dialogue for diagnosis, recommendation, knowledge-grounded dialogue, QA, and chitchat. MidMed covers four departments (otorhinolaryngology, ophthalmology, skin, and digestive system), with 8,175 dialogues. Furthermore, we build baselines on MidMed and propose an instruction-guiding medical dialogue generation framework, termed InsMed, to address this task. Experimental results show the effectiveness of InsMed.

* Accepted by ACL 2023 main conference. The first two authors contributed equally to this work 

Augmenting Hessians with Inter-Layer Dependencies for Mixed-Precision Post-Training Quantization

Jun 08, 2023
Clemens JS Schaefer, Navid Lambert-Shirzad, Xiaofan Zhang, Chiachen Chou, Tom Jablin, Jian Li, Elfie Guo, Caitlin Stanton, Siddharth Joshi, Yu Emma Wang

Efficiently serving neural network models with low latency is becoming more challenging due to increasing model complexity and parameter count. Model quantization offers a solution which simultaneously reduces memory footprint and compute requirements. However, aggressive quantization may lead to an unacceptable loss in model accuracy owing to differences in sensitivity to numerical imperfection across different layers in the model. To address this challenge, we propose a mixed-precision post-training quantization (PTQ) approach that assigns different numerical precisions to tensors in a network based on their specific needs, for a reduced memory footprint and improved latency while preserving model accuracy. Previous works rely on layer-wise Hessian information to determine numerical precision, but as we demonstrate, Hessian estimation is typically insufficient for determining an effective ordering of layer sensitivities. We address this by augmenting the estimated Hessian with additional information to capture inter-layer dependencies. We demonstrate that this consistently improves PTQ performance along the accuracy-latency Pareto frontier across multiple models. Our method combines second-order information and inter-layer dependencies to guide a bisection search, finding quantization configurations within a user-configurable model accuracy degradation range. We evaluate the effectiveness of our method on the ResNet50, MobileNetV2, and BERT models. Our experiments demonstrate latency reductions compared to a 16-bit baseline of 25.48%, 21.69%, and 33.28% respectively, while maintaining model accuracy to within 99.99% of the baseline model.
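The abstract names the bisection search but not its exact form. Assuming per-layer sensitivity scores are already available (Hessian-based and augmented with inter-layer terms, both taken as given here), the sketch below bisects over a sensitivity threshold to pick which layers drop to low precision while staying inside a user-set accuracy budget; `evaluate_accuracy` and the precision labels are placeholders, not the paper's implementation.

```python
def search_precision_config(sensitivities, evaluate_accuracy,
                            baseline_acc, max_degradation=0.01, iters=20):
    """Bisection over a sensitivity threshold: layers at or below the threshold
    are quantized to low precision, layers above stay at high precision.

    sensitivities:     {layer_name: augmented Hessian sensitivity score}
    evaluate_accuracy: callable(config) -> accuracy of the quantized model
    """
    lo, hi = min(sensitivities.values()), max(sensitivities.values())
    best_config = {name: "high" for name in sensitivities}   # fallback: all high precision

    for _ in range(iters):
        thr = (lo + hi) / 2
        config = {name: ("low" if s <= thr else "high")
                  for name, s in sensitivities.items()}
        acc = evaluate_accuracy(config)
        if baseline_acc - acc <= max_degradation:
            best_config, lo = config, thr    # acceptable: try quantizing more layers
        else:
            hi = thr                         # too much accuracy loss: be more conservative
    return best_config
```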


Mixed Precision Post Training Quantization of Neural Networks with Sensitivity Guided Search

Feb 07, 2023
Clemens JS Schaefer, Elfie Guo, Caitlin Stanton, Xiaofan Zhang, Tom Jablin, Navid Lambert-Shirzad, Jian Li, Chiachen Chou, Siddharth Joshi, Yu Emma Wang

Serving large-scale machine learning (ML) models efficiently and with low latency has become challenging owing to increasing model size and complexity. Quantizing models can simultaneously reduce memory and compute requirements, facilitating their widespread access. However, for large models not all layers are equally amenable to the same numerical precision, and aggressive quantization can lead to unacceptable loss in model accuracy. One approach to prevent this accuracy degradation is mixed-precision quantization, which allows different tensors to be quantized to varying levels of numerical precision, leveraging the capabilities of modern hardware. Such mixed-precision quantization can more effectively allocate numerical precision to different tensors 'as needed' to preserve model accuracy while reducing footprint and compute latency. In this paper, we propose a method to efficiently determine quantization configurations of different tensors in ML models using post-training mixed-precision quantization. We analyze three sensitivity metrics and evaluate them for guiding the configuration search of two algorithms. We evaluate our method on computer vision and natural language processing tasks and demonstrate latency reductions of up to 27.59% and 34.31% compared to the baseline 16-bit floating point model while guaranteeing no more than 1% accuracy degradation.
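The three sensitivity metrics are not spelled out in the abstract. As one common, assumed example, the sketch below scores each layer by the loss increase observed when only that layer is quantized while the rest stay at 16-bit; `quantize_layer`, `restore_layer`, and `eval_loss` are hypothetical hooks the user would supply.

```python
def per_layer_sensitivity(model, layers, quantize_layer, restore_layer, eval_loss):
    """Sensitivity of each layer = loss increase when only that layer is quantized.

    quantize_layer / restore_layer / eval_loss are placeholders for the user's
    own quantization hooks and calibration-set evaluation.
    """
    base_loss = eval_loss(model)
    scores = {}
    for name in layers:
        quantize_layer(model, name)               # quantize just this tensor
        scores[name] = eval_loss(model) - base_loss
        restore_layer(model, name)                # back to full precision
    return scores
```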
