Qing Wu

Magnetic Field-Based Reward Shaping for Goal-Conditioned Reinforcement Learning

Jul 16, 2023
Hongyu Ding, Yuanze Tang, Qing Wu, Bo Wang, Chunlin Chen, Zhi Wang

Goal-conditioned reinforcement learning (RL) is an interesting extension of the traditional RL framework, where the dynamic environment and reward sparsity can cause conventional learning algorithms to fail. Reward shaping is a practical approach to improving sample efficiency by embedding human domain knowledge into the learning process. Existing reward shaping methods for goal-conditioned RL are typically built on distance metrics with a linear and isotropic distribution, which may fail to provide sufficient information about an ever-changing environment with high complexity. This paper proposes a novel magnetic field-based reward shaping (MFRS) method for goal-conditioned RL tasks with dynamic targets and obstacles. Inspired by the physical properties of magnets, we treat the target and obstacles as permanent magnets and establish the reward function according to the intensity values of the magnetic field generated by these magnets. The nonlinear and anisotropic distribution of the magnetic field intensity provides more accessible and informative cues about the optimization landscape, yielding a more sophisticated magnetic reward than the distance-based setting. Further, we transform the magnetic reward into the form of potential-based reward shaping by concurrently learning a secondary potential function, which ensures the optimal policy invariance of our method. Experimental results in both simulated and real-world robotic manipulation tasks demonstrate that MFRS outperforms relevant existing methods and effectively improves the sample efficiency of RL algorithms in goal-conditioned tasks with various dynamics of the target and obstacles.
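
The optimal-policy-invariance guarantee comes from the standard potential-based shaping form F(s, s') = γΦ(s') − Φ(s) (Ng et al., 1999). Below is a minimal, hypothetical sketch of that idea with a dipole-style field intensity standing in for the potential; note the paper itself learns a secondary potential function rather than using the analytic field directly, and all names and constants here are illustrative:

```python
import numpy as np

def field_intensity(pos, magnet_pos, moment=1.0, eps=1e-6):
    # Dipole-like intensity fall-off: nonlinear in distance (~1/r^3),
    # unlike the linear distance metrics used by prior shaping methods.
    r = np.linalg.norm(pos - magnet_pos) + eps
    return moment / r**3

def magnetic_potential(state, target, obstacles):
    # Higher potential near the attracting target "magnet",
    # lower near the repelling obstacle "magnets".
    phi = field_intensity(state, target)
    phi -= sum(field_intensity(state, obs) for obs in obstacles)
    return phi

def shaped_reward(r_env, s, s_next, target, obstacles, gamma=0.99):
    # Potential-based shaping F = gamma * Phi(s') - Phi(s) preserves
    # the optimal policy of the original MDP.
    F = gamma * magnetic_potential(s_next, target, obstacles) \
        - magnetic_potential(s, target, obstacles)
    return r_env + F
```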

* Accepted by IEEE-CAA Journal of Automatica Sinica, 2023, DOI: 10.1109/JAS.2023.123477 

Unsupervised Polychromatic Neural Representation for CT Metal Artifact Reduction

Jun 27, 2023
Qing Wu, Lixuan Chen, Ce Wang, Hongjiang Wei, S. Kevin Zhou, Jingyi Yu, Yuyao Zhang

Emerging neural reconstruction techniques based on tomography (e.g., NeRF, NeAT, and NeRP) have started showing unique capabilities in medical imaging. In this work, we present a novel Polychromatic neural representation (Polyner) to tackle the challenging problem of CT imaging when metallic implants exist within the human body. The metal artifacts arise from the drastic variation of metal's attenuation coefficients at various energy levels of the X-ray spectrum, leading to a nonlinear metal effect in CT measurements. Reconstructing CT images from metal-affected measurements hence poses a complicated nonlinear inverse problem, where the empirical models adopted in previous metal artifact reduction (MAR) approaches lead to signal loss and strongly aliased reconstructions. Polyner instead models the MAR problem from a nonlinear inverse problem perspective. Specifically, we first derive a polychromatic forward model to accurately simulate the nonlinear CT acquisition process. Then, we incorporate our forward model into the implicit neural representation to accomplish reconstruction. Lastly, we adopt a regularizer to preserve the physical properties of the CT images across different energy levels while effectively constraining the solution space. Polyner is an unsupervised method and does not require any external training data. Experiments on multiple datasets show that Polyner achieves comparable or better performance than supervised methods on in-domain datasets while demonstrating significant performance improvements on out-of-domain datasets. To the best of our knowledge, Polyner is the first unsupervised MAR method that outperforms its supervised counterparts.
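
The key modeling step is a polychromatic Beer-Lambert forward model: the measured intensity is a spectrum-weighted sum of exponentially attenuated line integrals, one per energy bin. A minimal sketch of that discretized forward projection is below, assuming a hypothetical `mu_net` INR that outputs energy-resolved attenuation coefficients; the regularizer and training loop are omitted:

```python
import torch

def polychromatic_projection(mu_net, ray_points, spectrum, step):
    """Simulate one polychromatic CT measurement along a ray (sketch).

    mu_net    : INR mapping (N, 3) coordinates -> (N, E) attenuation per energy bin
    ray_points: (N, 3) sample points along the ray
    spectrum  : (E,) normalized X-ray spectrum weights
    step      : sampling interval along the ray
    """
    mu = mu_net(ray_points)                 # (N, E) energy-resolved attenuation
    line_integral = mu.sum(dim=0) * step    # (E,) discrete Beer-Lambert integral
    # The nonlinear metal effect: attenuation is exponentiated per energy bin
    # *before* summing over the spectrum, so the measurement is not a single
    # linear line integral.
    intensity = (spectrum * torch.exp(-line_integral)).sum()
    return -torch.log(intensity)            # post-log projection value
```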

* 19 pages 

Self-supervised arbitrary scale super-resolution framework for anisotropic MRI

May 02, 2023
Haonan Zhang, Yuhan Zhang, Qing Wu, Jiangjie Wu, Zhiming Zhen, Feng Shi, Jianmin Yuan, Hongjiang Wei, Chen Liu, Yuyao Zhang

In this paper, we propose an efficient self-supervised arbitrary-scale super-resolution (SR) framework to reconstruct isotropic magnetic resonance (MR) images from anisotropic MRI inputs without involving external training data. The proposed framework builds a training dataset using in-the-wild anisotropic MR volumes with arbitrary image resolution. We then formulate the 3D volume SR task as an SR problem for 2D image slices. The anisotropic volume's high-resolution (HR) plane is used to build the HR-LR image pairs for model training. We further adapt an implicit neural representation (INR) network to implement the 2D arbitrary-scale image SR model. Finally, we leverage the well-trained model to up-sample the 2D LR planes extracted from the anisotropic MR volumes to their HR views. The isotropic MR volumes can thus be reconstructed by stacking and averaging the generated HR slices. The proposed framework has two major advantages: (1) it only involves arbitrary-resolution anisotropic MR volumes, which greatly improves the model's practicality in real MR imaging scenarios (e.g., clinical brain image acquisition); (2) the INR-based SR model enables arbitrary-scale image SR from an arbitrary-resolution input image, which significantly improves model training efficiency. We perform experiments on a simulated public adult brain dataset and a real collected 7T brain dataset. The results indicate that our framework greatly outperforms two well-known self-supervised models for anisotropic MR image SR tasks.
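
The self-supervision comes entirely from the volume itself: its in-plane (HR) slices are degraded to synthesize LR counterparts, yielding the HR-LR pairs the 2D SR model trains on. A minimal sketch of that pair construction, assuming a `(D, H, W)` tensor whose axial plane is the HR one and using bilinear down-sampling as the (assumed) degradation; the INR training itself is omitted:

```python
import torch
import torch.nn.functional as F

def make_training_pairs(volume, scale):
    """Build 2D HR-LR pairs from the anisotropic volume's HR in-plane slices.

    volume: (D, H, W) tensor; the axial (H, W) plane is high-resolution,
            while the through-plane (D) direction is low-resolution.
    scale : down-sampling factor used to synthesize the LR counterpart.
    """
    pairs = []
    for z in range(volume.shape[0]):
        hr = volume[z][None, None]  # (1, 1, H, W) axial HR slice
        # Synthesize the LR view of the same slice by down-sampling.
        lr = F.interpolate(hr, scale_factor=1 / scale,
                           mode="bilinear", align_corners=False)
        pairs.append((lr.squeeze(), hr.squeeze()))
    return pairs
```

After training on such pairs, the same INR-based SR model is applied to the orthogonal (through-plane) LR slices to recover an isotropic volume, since the learned mapping is scale-agnostic.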

* 10 pages, 5 figures 

Spatiotemporal implicit neural representation for unsupervised dynamic MRI reconstruction

Dec 31, 2022
Jie Feng, Ruimin Feng, Qing Wu, Zhiyong Zhang, Yuyao Zhang, Hongjiang Wei

Supervised Deep-Learning (DL)-based reconstruction algorithms have shown state-of-the-art results for highly-undersampled dynamic Magnetic Resonance Imaging (MRI) reconstruction. However, their requirement for extensive high-quality ground-truth data hinders their application due to the generalization problem. Recently, Implicit Neural Representation (INR) has emerged as a powerful DL-based tool for solving inverse problems by characterizing the attributes of a signal as a continuous function of the corresponding coordinates in an unsupervised manner. In this work, we propose an INR-based method to improve dynamic MRI reconstruction from highly undersampled k-space data, which takes only spatiotemporal coordinates as inputs. Specifically, the proposed INR represents the dynamic MRI images as an implicit function and encodes them into neural networks. The weights of the network are learned only from the sparsely-acquired (k, t)-space data itself, without external training datasets or prior images. Benefiting from the strong implicit continuity regularization of INR together with explicit regularization for low-rankness and sparsity, the proposed method outperforms the compared scan-specific methods at various acceleration factors. For example, experiments on retrospective cardiac cine datasets show an improvement of 5.5 to 7.1 dB in PSNR for extremely high accelerations (up to 41.6-fold). The high quality and inherent continuity of the images provided by INR have great potential to further improve the spatiotemporal resolution of dynamic MRI, without the need for any training data.
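
The scan-specific training signal is data consistency in (k, t)-space: the INR renders a frame, the forward model maps it to k-space, and the loss is taken only on the acquired samples. A minimal single-frame sketch, using a plain Cartesian 2D FFT as a simplifying assumption for the forward model and omitting the paper's low-rank and sparsity regularizers; all names are hypothetical:

```python
import torch

def ktspace_loss(inr, coords_xyt, kmask, kdata_t):
    """Scan-specific data-consistency loss for one temporal frame (sketch).

    inr       : MLP mapping (x, y, t) -> 2 channels (real, imag)
    coords_xyt: (H*W, 3) spatial coordinates stamped with this frame's time
    kmask     : (H, W) boolean undersampling mask for this frame
    kdata_t   : (H, W) acquired complex k-space samples (zeros elsewhere)
    """
    H, W = kmask.shape
    out = inr(coords_xyt).view(H, W, 2)
    img = torch.complex(out[..., 0], out[..., 1])   # rendered frame
    kpred = torch.fft.fftshift(torch.fft.fft2(img)) # forward model: 2D FFT
    # Supervise only on the sparsely acquired (k, t) samples.
    return (torch.abs(kpred[kmask] - kdata_t[kmask]) ** 2).mean()
```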

* 9 pages, 5 figures 

Joint Rigid Motion Correction and Sparse-View CT via Self-Calibrating Neural Field

Nov 06, 2022
Qing Wu, Xin Li, Hongjiang Wei, Jingyi Yu, Yuyao Zhang

Neural Radiance Fields (NeRF) have received wide attention in Sparse-View Computed Tomography (SVCT) reconstruction tasks as a self-supervised deep learning framework. NeRF-based SVCT methods represent the desired CT image as a continuous function of spatial coordinates and train a Multi-Layer Perceptron (MLP) to learn the function by minimizing the loss on the SV sinogram. Benefiting from the continuous representation provided by NeRF, high-quality CT images can be reconstructed. However, existing NeRF-based SVCT methods strictly assume there is no relative motion during the CT acquisition, because they require accurate projection poses to model the X-rays that scan the SV sinogram. Therefore, these methods suffer from severe performance drops in real SVCT imaging with motion. In this work, we propose a self-calibrating neural field to recover the artifact-free image from a rigid motion-corrupted SV sinogram without using any external data. Specifically, we parametrize the inaccurate projection poses caused by rigid motion as trainable variables and then jointly optimize these pose variables and the MLP. We conduct numerical experiments on a public CT image dataset. The results indicate that our model significantly outperforms two representative NeRF-based methods on SVCT reconstruction tasks with four different levels of rigid motion.
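
The self-calibration idea is to expose the per-view pose errors as trainable parameters so that gradients from the sinogram loss flow into both the image MLP and the poses. A minimal, hypothetical parametrization (one in-plane rotation offset plus a 2D translation per projection view) might look like this:

```python
import torch
import torch.nn as nn

class SelfCalibratingPoses(nn.Module):
    """Per-view rigid pose corrections, optimized jointly with the image MLP."""

    def __init__(self, n_views):
        super().__init__()
        # Initialized at zero: we assume the nominal scanner geometry and
        # learn only the rigid-motion deviations from it.
        self.dtheta = nn.Parameter(torch.zeros(n_views))     # rotation offsets
        self.dt = nn.Parameter(torch.zeros(n_views, 2))      # 2D translations

    def forward(self, view_idx, nominal_theta):
        # Corrected pose = nominal geometry + learned deviation.
        theta = nominal_theta + self.dtheta[view_idx]
        return theta, self.dt[view_idx]

# Joint optimization (sketch): one optimizer over both parameter groups,
# so the sinogram loss calibrates the poses while fitting the image.
# opt = torch.optim.Adam([*mlp.parameters(), *poses.parameters()], lr=1e-3)
```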

* 5 pages 

A scan-specific unsupervised method for parallel MRI reconstruction via implicit neural representation

Oct 19, 2022
Ruimin Feng, Qing Wu, Yuyao Zhang, Hongjiang Wei

Parallel imaging is a widely-used technique to accelerate magnetic resonance imaging (MRI). However, current methods still perform poorly in reconstructing artifact-free MRI images from highly undersampled k-space data. Recently, implicit neural representation (INR) has emerged as a new deep learning paradigm for learning the internal continuity of an object. In this study, we applied INR to parallel MRI reconstruction. The MRI image was modeled as a continuous function of spatial coordinates. This function was parameterized by a neural network and learned directly from the measured k-space itself, without additional fully sampled high-quality training data. Benefiting from the powerful continuous representations provided by INR, the proposed method outperforms existing methods by suppressing aliasing artifacts and noise, especially at higher acceleration rates and smaller sizes of the auto-calibration signals. The high-quality results and scan specificity give the proposed method the potential to further accelerate the data acquisition of parallel MRI.
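
In a parallel-imaging setting, the scan-specific loss couples the INR image to every coil through a SENSE-style forward model: multiply by the coil sensitivity, FFT, and compare on the acquired k-space samples. A minimal sketch under the assumption that coil sensitivities have already been estimated (e.g., from the auto-calibration region); all names are illustrative, not the authors' implementation:

```python
import torch

def parallel_mri_loss(inr, coords, sens, kmask, kdata):
    """Scan-specific multi-coil data-consistency loss (sketch).

    inr   : MLP mapping (x, y) -> 2 channels (real, imag)
    coords: (H*W, 2) normalized spatial coordinates
    sens  : (C, H, W) complex coil sensitivity maps (assumed precomputed)
    kmask : (H, W) boolean undersampling mask
    kdata : (C, H, W) acquired multi-coil k-space data
    """
    C, H, W = sens.shape
    out = inr(coords).view(H, W, 2)
    img = torch.complex(out[..., 0], out[..., 1])
    # SENSE-style forward model: coil-weighted image, then 2D FFT per coil.
    kpred = torch.fft.fftshift(torch.fft.fft2(sens * img), dim=(-2, -1))
    # Loss on acquired samples only, across all coils.
    return (torch.abs((kpred - kdata)[:, kmask]) ** 2).mean()
```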

* conference 

Continuous longitudinal fetus brain atlas construction via implicit neural representation

Sep 14, 2022
Lixuan Chen, Jiangjie Wu, Qing Wu, Hongjiang Wei, Yuyao Zhang

A longitudinal fetal brain atlas is a powerful tool for understanding and characterizing the complex process of fetal brain development. Existing fetal brain atlases are typically constructed by averaging brain images at discrete time points independently over time. Due to the differences in ontogenetic trends among samples at different time points, the resulting atlases suffer from temporal inconsistency, which may lead to estimation errors in the brain developmental characteristic parameters along the timeline. To this end, we propose a multi-stage deep-learning framework that tackles the temporal inconsistency issue as a 4D (3D brain volume + 1D age) image denoising task. Using implicit neural representation, we construct a continuous and noise-free longitudinal fetal brain atlas as a function of the 4D spatiotemporal coordinate. Experimental results on two public fetal brain atlases (the CRL and FBA-Chinese atlases) show that the proposed method can significantly improve the atlas's temporal consistency while maintaining a good representation of fetal brain structure. In addition, the continuous longitudinal fetal brain atlases can be applied to generate 4D atlases with finer spatial and temporal resolution.
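
At its core, the continuous atlas is an MLP from a 4D spatiotemporal coordinate to an intensity, so intermediate gestational ages can be queried even though the source atlases exist only at discrete time points. A bare-bones, hypothetical version (layer sizes and activations are illustrative; the paper's multi-stage pipeline and any positional encoding are omitted):

```python
import torch
import torch.nn as nn

class Atlas4D(nn.Module):
    """Continuous atlas: intensity = f(x, y, z, gestational_age)."""

    def __init__(self, hidden=256, layers=5):
        super().__init__()
        dims = [4] + [hidden] * layers + [1]
        mods = []
        for i in range(len(dims) - 1):
            mods.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                mods.append(nn.ReLU())
        self.net = nn.Sequential(*mods)

    def forward(self, xyzt):
        # xyzt: (N, 4) spatiotemporal coordinates -> (N, 1) intensities.
        return self.net(xyzt)

# Because age is a continuous input, the trained network can be sampled
# between the discrete atlas time points (e.g., at gestational week 23.5),
# which is what enables finer temporal resolution.
```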

* 11 pages, 4 figures 

Noise2SR: Learning to Denoise from Super-Resolved Single Noisy Fluorescence Image

Sep 14, 2022
Xuanyu Tian, Qing Wu, Hongjiang Wei, Yuyao Zhang

Fluorescence microscopy is a key driver of discovery in biomedical research. However, owing to the limitations of microscope hardware and the characteristics of the observed samples, fluorescence microscopy images are susceptible to noise. Recently, a few self-supervised deep learning (DL) denoising methods have been proposed. However, the training efficiency and denoising performance of existing methods are relatively low for real-scene noise removal. To address this issue, this paper proposes a self-supervised image denoising method, Noise2SR (N2SR), that trains a simple and effective image denoising model from a single noisy observation. Our Noise2SR denoising model is designed for training with paired noisy images of different dimensions. Benefiting from this training strategy, Noise2SR is more efficiently self-supervised and able to restore more image details from a single noisy observation. Experimental results on simulated noise and real microscopy noise removal show that Noise2SR outperforms two blind-spot-based self-supervised DL image denoising methods. We envision that Noise2SR has the potential to improve the quality of other kinds of scientific imaging.
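
One way to realize "paired noisy images of different dimensions" is to sub-sample the noisy observation into a smaller noisy input, super-resolve it back to full size, and supervise only on the withheld pixels, so the network can neither copy its input nor reproduce the (assumed zero-mean) noise. The sketch below is our reading of that strategy, not the authors' code; `model` and the sampling scheme are hypothetical:

```python
import random
import torch
import torch.nn.functional as F

def noise2sr_step(model, noisy, stride=2):
    """One self-supervised training step on a single noisy image (sketch).

    model : network up-sampling a (1, 1, H/stride, W/stride) image to (1, 1, H, W)
    noisy : (1, 1, H, W) single noisy observation
    """
    # Keep one pixel per stride x stride block as the small noisy input ...
    oy, ox = random.randrange(stride), random.randrange(stride)
    lr = noisy[..., oy::stride, ox::stride]
    pred = model(lr)  # super-resolve back to full resolution
    # ... and compute the loss only on pixels NOT shown to the network, so
    # the clean signal is the only thing the input and target share.
    mask = torch.ones_like(noisy, dtype=torch.bool)
    mask[..., oy::stride, ox::stride] = False
    return F.mse_loss(pred[mask], noisy[mask])
```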

* MICCAI 2022  
* 12 pages, 6 figures 

Self-Supervised Coordinate Projection Network for Sparse-View Computed Tomography

Sep 12, 2022
Qing Wu, Ruimin Feng, Hongjiang Wei, Jingyi Yu, Yuyao Zhang

In the present work, we propose a Self-supervised COordinate Projection nEtwork (SCOPE) to reconstruct an artifact-free CT image from a single SV sinogram by solving the inverse tomography imaging problem. Compared with recent related works that solve similar problems using an implicit neural representation (INR) network, our essential contribution is an effective and simple re-projection strategy that pushes the tomography image reconstruction quality beyond that of supervised deep learning CT reconstruction works. The proposed strategy is inspired by the simple relationship between linear algebra and inverse problems. To solve the under-determined linear equation system, we first introduce INR to constrain the solution space via an image continuity prior and achieve a rough solution. Second, we propose to generate a dense-view sinogram, which improves the rank of the linear equation system and produces a more stable CT image solution space. Our experimental results demonstrate that the re-projection strategy significantly improves the image reconstruction quality (by at least +3 dB in PSNR). Besides, we integrate recent hash encoding into our SCOPE model, which greatly accelerates model training. Finally, we evaluate SCOPE on parallel- and fan-beam X-ray SVCT reconstruction tasks. Experimental results indicate that the proposed SCOPE model outperforms two recent INR-based methods and two popular supervised DL methods, both quantitatively and qualitatively.
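
The re-projection step exploits the fact that a trained INR is a continuous model of the image: it can render projections at any view angle, turning the sparse-view system into a dense, higher-rank one that a classical solver handles well. A minimal, hypothetical pipeline sketch using scikit-image's filtered back-projection as the classical solver; the INR rendering itself is abstracted behind `render_projection`:

```python
import numpy as np
# skimage's iradon is a standard filtered back-projection implementation;
# this is an illustrative pipeline, not the authors' exact code.
from skimage.transform import iradon

def reproject_and_reconstruct(render_projection, n_dense=720, size=512):
    """SCOPE-style re-projection strategy (sketch).

    render_projection: callable(theta) -> 1D projection rendered by the
                       trained INR at view angle theta (degrees).
    """
    thetas = np.linspace(0.0, 180.0, n_dense, endpoint=False)
    # Densify: synthesize many views from the continuous INR ...
    dense_sino = np.stack([render_projection(t) for t in thetas], axis=1)
    # ... raising the rank of the linear system, then solve it with
    # classical filtered back-projection.
    return iradon(dense_sino, theta=thetas, output_size=size,
                  filter_name="ramp", circle=True)
```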

* 10 pages, 11 figures, and 6 tables 

An Arbitrary Scale Super-Resolution Approach for 3-Dimensional Magnetic Resonance Image using Implicit Neural Representation

Oct 29, 2021
Qing Wu, Yuwei Li, Yawen Sun, Yan Zhou, Hongjiang Wei, Jingyi Yu, Yuyao Zhang

High-Resolution (HR) medical images provide rich anatomical structure details to facilitate early and accurate diagnosis. In MRI, restricted by hardware capacity, scan time, and patient cooperation ability, isotropic 3D HR image acquisition typically requires long scan times, resulting in small spatial coverage and low SNR. Recent studies have shown that, with deep convolutional neural networks, isotropic HR MR images can be recovered from low-resolution (LR) inputs via single image super-resolution (SISR) algorithms. However, most existing SISR methods learn a scale-specific mapping between LR and HR images, so they can only handle a fixed up-sampling rate. Achieving a different up-sampling rate requires building a separate SR network, which is very time-consuming and resource-intensive. In this paper, we propose ArSSR, an Arbitrary Scale Super-Resolution approach for recovering 3D HR MR images. In the ArSSR model, the reconstruction of HR images at different up-scaling rates is defined as learning a continuous implicit voxel function from the observed LR images. The SR task is then converted into representing the implicit voxel function via deep neural networks from a set of paired HR-LR training examples. The ArSSR model consists of an encoder network and a decoder network. Specifically, the convolutional encoder network extracts feature maps from the LR input images, and the fully-connected decoder network approximates the implicit voxel function. Due to the continuity of the learned function, a single ArSSR model can achieve arbitrary up-sampling rate reconstruction of HR images from any input LR image after training. Experimental results on three datasets show that the ArSSR model can achieve state-of-the-art SR performance for 3D HR MR image reconstruction while using a single trained model to achieve arbitrary up-sampling scales.
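
The encoder-decoder split is what makes the scale arbitrary: the convolutional encoder produces a feature volume at LR resolution, and the fully-connected decoder is queried at continuous coordinates, interpolating encoder features at each query point. A schematic sketch with hypothetical layer sizes (positional encoding and training details omitted):

```python
import torch
import torch.nn as nn

class ArSSRSketch(nn.Module):
    """Encoder-decoder sketch of arbitrary-scale 3D SR (illustrative sizes)."""

    def __init__(self, feat=64):
        super().__init__()
        # Convolutional encoder: LR volume -> per-voxel feature map.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, feat, 3, padding=1), nn.ReLU(),
            nn.Conv3d(feat, feat, 3, padding=1),
        )
        # Fully-connected decoder: (feature, continuous coordinate) -> intensity.
        self.decoder = nn.Sequential(
            nn.Linear(feat + 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, lr_volume, coords):
        # lr_volume: (1, 1, D, H, W); coords: (1, N, 3) in [-1, 1]^3,
        # assumed in grid_sample's (x, y, z) ordering.
        feats = self.encoder(lr_volume)                       # (1, C, D, H, W)
        # Trilinearly interpolate encoder features at the query coordinates;
        # because coords are continuous, any up-sampling rate is possible.
        grid = coords.view(1, -1, 1, 1, 3)
        f = nn.functional.grid_sample(feats, grid, align_corners=True)
        f = f.view(1, feats.shape[1], -1).permute(0, 2, 1)    # (1, N, C)
        return self.decoder(torch.cat([f, coords], dim=-1))   # (1, N, 1)
```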

* 18 pages, 13 figures, 4 tables; submitted to Medical Image Analysis 