Wenqi Lu

Fourier-Net+: Leveraging Band-Limited Representation for Efficient 3D Medical Image Registration

Jul 06, 2023
Xi Jia, Alexander Thorley, Alberto Gomez, Wenqi Lu, Dipak Kotecha, Jinming Duan

U-Net style networks are commonly utilized in unsupervised image registration to predict dense displacement fields, which for high-resolution volumetric image data is a resource-intensive and time-consuming task. To tackle this challenge, we first propose Fourier-Net, which replaces the costly U-Net style expansive path with a parameter-free model-driven decoder. Instead of directly predicting a full-resolution displacement field, our Fourier-Net learns a low-dimensional representation of the displacement field in the band-limited Fourier domain, which our model-driven decoder converts to a full-resolution displacement field in the spatial domain. Expanding upon Fourier-Net, we then introduce Fourier-Net+, which additionally takes the band-limited spatial representation of the images as input and further reduces the number of convolutional layers in the U-Net style network's contracting path. Finally, to enhance the registration performance, we propose a cascaded version of Fourier-Net+. We evaluate our proposed methods on three datasets, on which our proposed Fourier-Net and its variants achieve results comparable to current state-of-the-art methods, while exhibiting faster inference speeds, a lower memory footprint, and fewer multiply-add operations. With such a small computational cost, our Fourier-Net+ enables the efficient training of large-scale 3D registration on low-VRAM GPUs. Our code is publicly available at \url{https://github.com/xi-jia/Fourier-Net}.

* Under review. arXiv admin note: text overlap with arXiv:2211.16342 
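
As an illustration of the band-limited image representation that Fourier-Net+ takes as input, the following minimal NumPy sketch forms a lower-resolution, band-limited version of a volume by cropping the centred DFT and transforming back; the function name, crop sizes, intensity rescaling, and the assumption of even dimensions are illustrative choices, not details taken from the paper.

    import numpy as np

    def band_limited_image(img, keep):
        """Return a low-resolution, band-limited version of a 3D volume.

        img  : array of shape (D, H, W)
        keep : number of retained low-frequency samples per axis (d, h, w),
               assumed even and no larger than the corresponding image size
        """
        spectrum = np.fft.fftshift(np.fft.fftn(img))     # centre the low frequencies
        centre = [s // 2 for s in img.shape]
        half = [k // 2 for k in keep]
        # crop the central (low-frequency) block of the spectrum
        cropped = spectrum[centre[0] - half[0]:centre[0] + half[0],
                           centre[1] - half[1]:centre[1] + half[1],
                           centre[2] - half[2]:centre[2] + half[2]]
        small = np.fft.ifftn(np.fft.ifftshift(cropped)).real
        # rescale so that the mean intensity of the original volume is preserved
        return small * (np.prod(keep) / np.prod(img.shape))

    # e.g. a 160x192x224 volume reduced to a quarter of its size along each axis
    vol = np.random.rand(160, 192, 224).astype(np.float32)
    small = band_limited_image(vol, keep=(40, 48, 56))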

Fourier-Net: Fast Image Registration with Band-limited Deformation

Nov 29, 2022
Xi Jia, Joseph Bartlett, Wei Chen, Siyang Song, Tianyang Zhang, Xinxing Cheng, Wenqi Lu, Zhaowen Qiu, Jinming Duan

Unsupervised image registration commonly adopts U-Net style networks to predict dense displacement fields in the full-resolution spatial domain. For high-resolution volumetric image data, however, this process is resource-intensive and time-consuming. To tackle this problem, we propose Fourier-Net, which replaces the expansive path in a U-Net style network with a parameter-free model-driven decoder. Specifically, instead of learning to output a full-resolution displacement field in the spatial domain, our Fourier-Net learns its low-dimensional representation in a band-limited Fourier domain. This representation is then decoded by our devised model-driven decoder (consisting of a zero-padding layer and an inverse discrete Fourier transform layer) into the dense, full-resolution displacement field in the spatial domain. These changes allow our unsupervised Fourier-Net to contain fewer parameters and computational operations, resulting in faster inference speeds. Fourier-Net is then evaluated on two public 3D brain datasets against various state-of-the-art approaches. For example, when compared to a recent transformer-based method, i.e., TransMorph, our Fourier-Net, using only 0.22$\%$ of its parameters and 6.66$\%$ of the mult-adds, achieves a 0.6\% higher Dice score and an 11.48$\times$ faster inference speed. Code is available at \url{https://github.com/xi-jia/Fourier-Net}.

* This version was submitted to and accepted by AAAI 2023. (Some of) The content will be changed according to the reviewers' comments 
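
A minimal PyTorch sketch of the parameter-free, model-driven decoder described above (zero padding of the band-limited spectrum followed by an inverse DFT); the tensor layout, centring convention, and the note on normalisation are assumptions for illustration rather than the paper's exact implementation.

    import torch

    def fourier_decoder(low_freq, full_shape):
        """Decode a band-limited displacement spectrum into a dense field.

        low_freq   : complex tensor of shape (3, d, h, w) holding the centred
                     low-frequency coefficients of the three displacement components
        full_shape : target spatial shape (D, H, W)
        """
        full = torch.zeros((3, *full_shape), dtype=torch.complex64)
        starts = [(F - s) // 2 for F, s in zip(full_shape, low_freq.shape[1:])]
        # zero padding: place the band-limited block at the centre of the full spectrum
        full[:,
             starts[0]:starts[0] + low_freq.shape[1],
             starts[1]:starts[1] + low_freq.shape[2],
             starts[2]:starts[2] + low_freq.shape[3]] = low_freq
        # inverse DFT back to the spatial domain yields the full-resolution field;
        # a constant scaling factor may be needed depending on the DFT normalisation
        return torch.fft.ifftn(torch.fft.ifftshift(full, dim=(1, 2, 3)), dim=(1, 2, 3)).real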

U-Net vs Transformer: Is U-Net Outdated in Medical Image Registration?

Aug 13, 2022
Xi Jia, Joseph Bartlett, Tianyang Zhang, Wenqi Lu, Zhaowen Qiu, Jinming Duan

Due to their extreme long-range modeling capability, vision transformer-based networks have become increasingly popular in deformable image registration. We believe, however, that the receptive field of a 5-layer convolutional U-Net is sufficient to capture accurate deformations without needing long-range dependencies. The purpose of this study is therefore to investigate whether U-Net-based methods are outdated compared to modern transformer-based approaches when applied to medical image registration. For this, we propose a large-kernel U-Net (LKU-Net) by embedding a parallel convolutional block into a vanilla U-Net in order to enlarge the effective receptive field. On the public 3D IXI brain dataset for atlas-based registration, we show that the performance of the vanilla U-Net is already comparable with that of state-of-the-art transformer-based networks (such as TransMorph), and that the proposed LKU-Net outperforms TransMorph while using only 1.12% of its parameters and 10.8% of its mult-add operations. We further evaluate LKU-Net on a MICCAI Learn2Reg 2021 challenge dataset for inter-subject registration, where it also outperforms TransMorph and ranks first on the public leaderboard as of the submission of this work. With only modest modifications to the vanilla U-Net, we show that U-Net can outperform transformer-based architectures on inter-subject and atlas-based 3D medical image registration. Code is available at https://github.com/xi-jia/LKU-Net.

* Accepted to MICCAI-MLMI 2022 
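
A minimal PyTorch sketch of the general idea of a parallel large-kernel convolutional block added to a plain U-Net stage; the kernel size, the particular set of branches, and fusion by summation are illustrative assumptions rather than the exact LKU-Net block.

    import torch
    import torch.nn as nn

    class ParallelLargeKernelBlock(nn.Module):
        """Sum a large-kernel branch with a standard 3x3x3 branch, a 1x1x1 branch,
        and an identity path to enlarge the effective receptive field."""

        def __init__(self, channels, large_kernel=5):
            super().__init__()
            self.small = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
            self.large = nn.Conv3d(channels, channels, kernel_size=large_kernel,
                                   padding=large_kernel // 2)
            self.point = nn.Conv3d(channels, channels, kernel_size=1)
            self.act = nn.PReLU()

        def forward(self, x):
            # all branches see the same input; their outputs are fused by summation
            return self.act(self.small(x) + self.large(x) + self.point(x) + x)

    # e.g. a feature map with 16 channels on a 48^3 crop
    y = ParallelLargeKernelBlock(16)(torch.randn(1, 16, 48, 48, 48))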

SlideGraph+: Whole Slide Image Level Graphs to Predict HER2 Status in Breast Cancer

Oct 12, 2021
Wenqi Lu, Michael Toss, Emad Rakha, Nasir Rajpoot, Fayyaz Minhas

Human epidermal growth factor receptor 2 (HER2) is an important prognostic and predictive factor which is overexpressed in 15-20% of breast cancers (BCa). The determination of its status is a key clinical decision-making step for selection of treatment regimen and prognostication. HER2 status is evaluated using transcriptomics or immunohistochemistry (IHC) and in situ hybridisation (ISH), which incur additional costs and tissue burden, and are subject to analytical variability arising from manual observational biases in scoring. In this study, we propose a novel graph neural network (GNN) based model (termed SlideGraph+) to predict HER2 status directly from whole-slide images of routine Haematoxylin and Eosin (H&E) slides. The network was trained and tested on slides from The Cancer Genome Atlas (TCGA) in addition to two independent test datasets. We demonstrate that the proposed model outperforms the state-of-the-art methods, with area under the ROC curve (AUC) values > 0.75 on TCGA and 0.8 on the independent test sets. Our experiments show that the proposed approach can be utilised for case triaging as well as pre-ordering diagnostic tests in a diagnostic setting. It can also be used for other weakly supervised prediction problems in computational pathology. The SlideGraph+ code is available at https://github.com/wenqi006/SlideGraph.

* 20 pages, 11 figures, 3 tables 
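
To make the WSI-level graph idea concrete, here is a minimal PyTorch sketch of connecting patch-level nodes by spatial proximity and applying one round of mean-aggregation message passing; the radius, feature handling, and aggregation scheme are illustrative assumptions and not the SlideGraph+ architecture itself.

    import torch
    import torch.nn as nn

    def build_wsi_graph(coords, radius=4000.0):
        """Connect nodes (patches or patch clusters) whose slide coordinates lie
        within `radius` of each other; returns an edge index of shape (2, E)."""
        dist = torch.cdist(coords, coords)
        src, dst = torch.nonzero((dist < radius) & (dist > 0), as_tuple=True)
        return torch.stack([src, dst])

    class MeanAggregationLayer(nn.Module):
        """One message-passing step: mean of neighbour features plus a self term."""

        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.lin_self = nn.Linear(in_dim, out_dim)
            self.lin_neigh = nn.Linear(in_dim, out_dim)

        def forward(self, x, edge_index):
            src, dst = edge_index
            agg = torch.zeros_like(x).index_add_(0, dst, x[src])   # sum over neighbours
            deg = torch.zeros(x.size(0), 1).index_add_(0, dst, torch.ones(src.size(0), 1))
            return torch.relu(self.lin_self(x) + self.lin_neigh(agg / deg.clamp(min=1)))

    # toy example: 100 nodes with 2D slide coordinates and 64-d patch features;
    # a slide-level HER2 score could then be read out by pooling over node outputs
    coords, feats = torch.rand(100, 2) * 50000, torch.randn(100, 64)
    node_out = MeanAggregationLayer(64, 32)(feats, build_wsi_graph(coords))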

Semantic annotation for computational pathology: Multidisciplinary experience and best practice recommendations

Jun 25, 2021
Noorul Wahab, Islam M Miligy, Katherine Dodd, Harvir Sahota, Michael Toss, Wenqi Lu, Mostafa Jahanifar, Mohsin Bilal, Simon Graham, Young Park, Giorgos Hadjigeorghiou, Abhir Bhalerao, Ayat Lashen, Asmaa Ibrahim, Ayaka Katayama, Henry O Ebili, Matthew Parkin, Tom Sorell, Shan E Ahmed Raza, Emily Hero, Hesham Eldaly, Yee Wah Tsang, Kishore Gopalakrishnan, David Snead, Emad Rakha, Nasir Rajpoot, Fayyaz Minhas

Recent advances in whole slide imaging (WSI) technology have led to the development of a myriad of computer vision and artificial intelligence (AI) based diagnostic, prognostic, and predictive algorithms. Computational Pathology (CPath) offers an integrated solution to utilize information embedded in pathology WSIs beyond what we obtain through visual assessment. For automated analysis of WSIs and validation of machine learning (ML) models, annotations at the slide, tissue and cellular levels are required. The annotation of key visual constructs in pathology images is an important component of CPath projects. Improper annotations can result in algorithms which are hard to interpret and can potentially produce inaccurate and inconsistent results. Despite the crucial role of annotations in CPath projects, there are no well-defined guidelines or best practices on how annotations should be carried out. In this paper, we address this shortcoming by presenting the experience and best practices acquired during the execution of a large-scale annotation exercise involving a multidisciplinary team of pathologists, ML experts and researchers as part of the Pathology image data Lake for Analytics, Knowledge and Education (PathLAKE) consortium. We present a real-world case study along with examples of different types of annotations, a diagnostic algorithm, an annotation data dictionary and annotation constructs. The analyses reported in this work highlight best practice recommendations that can be used as annotation guidelines over the lifecycle of a CPath project.

A new nonlocal forward model for diffuse optical tomography

Jun 03, 2019
Wenqi Lu, Jinming Duan, Joshua Deepak Veesa, Iain B. Styles

The forward model in diffuse optical tomography (DOT) describes how light propagates through a turbid medium. It is often approximated by a diffusion equation (DE) that is numerically discretized by the classical finite element method (FEM). We propose a nonlocal diffusion equation (NDE) as a new forward model for DOT, the discretization of which is carried out with an efficient graph-based numerical method (GNM). To quantitatively evaluate the new forward model, we first conduct experiments on a homogeneous slab, where the numerical accuracy of both NDE and DE is compared against the existing analytical solution. We further evaluate NDE by comparing its image reconstruction performance (inverse problem) to that of DE. Our experiments show that NDE is quantitatively comparable to DE and is up to 64% faster due to the efficient graph-based representation that can be implemented identically for geometries in different dimensions.

* 7 pages, 9 figures 
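
For readers unfamiliar with graph-based discretisation, the following minimal NumPy sketch assembles a weighted k-nearest-neighbour graph Laplacian over arbitrary node positions, the kind of discrete operator a graph-based numerical method can use where FEM would assemble mesh matrices; the neighbourhood rule, Gaussian weights, and parameter values are illustrative assumptions, not the GNM used in the paper.

    import numpy as np

    def knn_graph_laplacian(points, k=8, sigma=1.0):
        """Weighted k-NN graph Laplacian L = D - W over node positions.

        points : array of shape (n, dim); works identically in 2D and 3D
        """
        n = points.shape[0]
        d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
        W = np.zeros((n, n))
        for i in range(n):
            nbrs = np.argsort(d2[i])[1:k + 1]                           # k nearest neighbours, excluding self
            W[i, nbrs] = np.exp(-d2[i, nbrs] / (2.0 * sigma ** 2))
        W = np.maximum(W, W.T)                                          # symmetrise the adjacency
        return np.diag(W.sum(axis=1)) - W

    # e.g. 500 nodes scattered in a 3D slab
    L = knn_graph_laplacian(np.random.rand(500, 3), k=8, sigma=0.1)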

Graph- and finite element-based total variation models for the inverse problem in diffuse optical tomography

Jan 07, 2019
Wenqi Lu, Jinming Duan, David Orive-Miguel, Lionel Herve, Iain B Styles

Total variation (TV) is a powerful regularization method that has been widely applied in different imaging applications, but it is difficult to apply to diffuse optical tomography (DOT) image reconstruction (the inverse problem) due to complex and unstructured geometries, the non-linearity of the data fitting and regularization terms, and the non-differentiability of the regularization term. We develop several approaches to overcome these difficulties by: i) defining discrete differential operators for unstructured geometries using both finite element and graph representations; ii) developing an optimization algorithm based on the alternating direction method of multipliers (ADMM) for the non-differentiable and non-linear minimization problem; iii) investigating isotropic and anisotropic variants of TV regularization, and comparing their finite element- and graph-based implementations. These approaches are evaluated in experiments on simulated data and on real data acquired from a tissue phantom. Our results show that both FEM- and graph-based TV regularization are able to accurately reconstruct both sparse and non-sparse distributions without the over-smoothing effect of Tikhonov regularization and the over-sparsifying effect of L$_1$ regularization. The graph representation was found to outperform the FEM method for low-resolution meshes, while the FEM method was found to be more accurate for high-resolution meshes.

* 23 pages, 12 figures 
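
A minimal NumPy sketch of the ADMM splitting that handles the non-differentiable TV term for a linearised step of such an inverse problem: the x-update is a linear solve and the auxiliary variable z = Dx is updated by soft-thresholding. The quadratic data term, matrix names, and single linear solve per iteration are simplifying assumptions for illustration; the paper's full algorithm handles the non-linear DOT forward model.

    import numpy as np

    def admm_graph_tv(A, b, D, lam, rho=1.0, iters=100):
        """Solve min_x 0.5*||A x - b||^2 + lam*||D x||_1 with ADMM, where D is a
        discrete difference operator built on a graph or finite element mesh."""
        x = np.zeros(A.shape[1])
        z = np.zeros(D.shape[0])
        u = np.zeros(D.shape[0])                      # scaled dual variable
        Q = A.T @ A + rho * (D.T @ D)                 # fixed system matrix for the x-update
        for _ in range(iters):
            x = np.linalg.solve(Q, A.T @ b + rho * D.T @ (z - u))
            Dx = D @ x
            # z-update: soft-thresholding (shrinkage) enforces the TV (L1) penalty
            z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - lam / rho, 0.0)
            u = u + Dx - z                            # dual ascent step
        return x

    # toy example with a random forward operator and a 1D forward-difference operator
    rng = np.random.default_rng(0)
    A, x_true = rng.normal(size=(80, 50)), np.repeat([0.0, 1.0, 0.0], [20, 10, 20])
    D = np.eye(50, k=1)[:-1] - np.eye(50)[:-1]
    x_rec = admm_graph_tv(A, A @ x_true, D, lam=0.5)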