Xiaojian Xu

AWFSD: Accelerated Wirtinger Flow with Score-based Diffusion Image Prior for Poisson-Gaussian Holographic Phase Retrieval

May 12, 2023
Zongyu Li, Jason Hu, Xiaojian Xu, Liyue Shen, Jeffrey A. Fessler

Phase retrieval (PR) is an essential problem in a number of coherent imaging systems. This work addresses the holographic phase retrieval problem in real-world scenarios where the measurements are corrupted by a mixture of Poisson and Gaussian (PG) noise that stems from optical imaging systems. To solve this problem, we develop a novel algorithm based on Accelerated Wirtinger Flow that uses Score-based Diffusion models as the generative prior (AWFSD). In particular, we frame the PR problem as an optimization task that involves both a data-fidelity term and a regularization term. We derive the gradient of the PG log-likelihood function along with its corresponding Lipschitz constant, ensuring a more accurate data-consistency term for practical measurements. We introduce a generative prior as part of our regularization approach by using a score-based diffusion model to capture (the gradient of) the image prior distribution. We provide theoretical analysis that establishes a critical-point convergence guarantee for the proposed AWFSD algorithm. Our simulation experiments demonstrate that: 1) the proposed algorithm based on the PG likelihood model improves reconstruction over algorithms based solely on either a Gaussian or a Poisson likelihood; and 2) the proposed AWFSD algorithm produces reconstructions with higher image quality both qualitatively and quantitatively, and is more robust to variations in noise levels than state-of-the-art phase retrieval methods.
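
As a rough illustration of the algorithmic structure described above, the following minimal NumPy sketch runs a Nesterov-accelerated Wirtinger-flow-style iteration with a score-based regularization term. It substitutes a plain Gaussian data fit for the paper's Poisson-Gaussian log-likelihood gradient and an analytic Gaussian-prior score (toy_score) for a trained diffusion model; the function names, step size, and regularization weight are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def wf_gradient(x, A, y):
    """Wirtinger gradient of the Gaussian data fit 0.5 * || |Ax|^2 - y ||^2.
    (AWFSD instead uses the derived Poisson-Gaussian log-likelihood gradient;
    the Gaussian fit is used here only to keep the sketch short.)"""
    Ax = A @ x
    return A.conj().T @ ((np.abs(Ax) ** 2 - y) * Ax)

def toy_score(x, sigma=1.0):
    """Placeholder for grad log p(x); AWFSD uses a trained score-based
    diffusion network, here replaced by a zero-mean Gaussian prior."""
    return -x / sigma ** 2

def accelerated_wf_with_score(y, A, step, reg, n_iter=500):
    """Nesterov-accelerated Wirtinger-flow-style iteration with a score-based
    regularization term (structural sketch, not the authors' exact update)."""
    x = np.ones(A.shape[1], dtype=complex)
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = wf_gradient(z, A, y) - reg * toy_score(z)
        x_new = z - step * grad
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t ** 2))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x

# Tiny synthetic example with illustrative parameters (recovery of x from
# |Ax|^2 is at best up to a global phase).
rng = np.random.default_rng(0)
x_true = rng.normal(size=8) + 1j * rng.normal(size=8)
A = rng.normal(size=(64, 8)) + 1j * rng.normal(size=(64, 8))
y = np.abs(A @ x_true) ** 2
x_hat = accelerated_wf_with_score(y, A, step=1e-4, reg=1e-3)
```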

CoRRECT: A Deep Unfolding Framework for Motion-Corrected Quantitative R2* Mapping

Oct 12, 2022
Xiaojian Xu, Weijie Gan, Satya V. V. N. Kothapalli, Dmitriy A. Yablonskiy, Ulugbek S. Kamilov

Quantitative MRI (qMRI) refers to a class of MRI methods for quantifying the spatial distribution of biological tissue parameters. Traditional qMRI methods usually deal separately with artifacts arising from accelerated data acquisition, involuntary physical motion, and magnetic-field inhomogeneities, leading to suboptimal end-to-end performance. This paper presents CoRRECT, a unified deep unfolding (DU) framework for qMRI consisting of a model-based end-to-end neural network, a method for motion-artifact reduction, and a self-supervised learning scheme. The network is trained to produce R2* maps whose k-space data match the measured data while accounting for motion and field inhomogeneities. When deployed, CoRRECT uses only the k-space data, without any pre-computed parameters for motion or inhomogeneity correction. Our results on experimentally collected multi-Gradient-Recalled Echo (mGRE) MRI data show that CoRRECT recovers motion- and inhomogeneity-artifact-free R2* maps in highly accelerated acquisition settings. This work opens the door to DU methods that can integrate physical measurement models, biophysical signal models, and learned prior models for high-quality qMRI.
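
The sketch below shows the generic deep-unfolding skeleton that frameworks like CoRRECT build on: each unrolled layer applies a gradient step on a data-consistency term followed by a learned refinement. The linear Gaussian forward model and the learned_refiner placeholder are assumptions for illustration; CoRRECT additionally folds motion and field-inhomogeneity effects into the forward model and trains the network end-to-end with a self-supervised scheme.

```python
import numpy as np

def learned_refiner(x):
    """Placeholder for a trained CNN module; a real DU network applies a
    learned refinement with shared or per-layer weights."""
    return 0.9 * x

def unfolded_recon(y, A, gamma=0.1, n_unroll=10):
    """Generic deep-unfolding skeleton: a gradient step on a (here, linear
    Gaussian) data-consistency term followed by a learned refinement."""
    x = A.conj().T @ y                                  # adjoint initialization
    for _ in range(n_unroll):
        x = x - gamma * A.conj().T @ (A @ x - y)        # data consistency
        x = learned_refiner(x)                          # learned prior step
    return x

# Toy usage with a random linear forward model.
rng = np.random.default_rng(1)
A = rng.normal(size=(32, 16)) / np.sqrt(32)
x_true = rng.normal(size=16)
y = A @ x_true
x_hat = unfolded_recon(y, A)
```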

Online Deep Equilibrium Learning for Regularization by Denoising

May 25, 2022
Jiaming Liu, Xiaojian Xu, Weijie Gan, Shirin Shoushtari, Ulugbek S. Kamilov

Plug-and-Play Priors (PnP) and Regularization by Denoising (RED) are widely-used frameworks for solving imaging inverse problems by computing fixed-points of operators combining physical measurement models and learned image priors. While traditional PnP/RED formulations have focused on priors specified using image denoisers, there is a growing interest in learning PnP/RED priors that are end-to-end optimal. The recent Deep Equilibrium Models (DEQ) framework has enabled memory-efficient end-to-end learning of PnP/RED priors by implicitly differentiating through the fixed-point equations without storing intermediate activation values. However, because the computational/memory complexity of the measurement models in PnP/RED scales with the total number of measurements, DEQ remains impractical for many imaging applications. We propose ODER as a new strategy for improving the efficiency of DEQ through stochastic approximations of the measurement models. We theoretically analyze ODER, giving insights into its convergence and its ability to approximate the traditional DEQ approach. Our numerical results on three distinct imaging applications suggest potential improvements in training/testing complexity due to ODER.

* 28 pages, 8 figures 
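
The sketch below illustrates the core idea of ODER in a RED-style fixed-point iteration: the full data-consistency gradient is replaced by an unbiased minibatch estimate over measurement rows. The quadratic data term, the denoiser stand-in, and all parameters are hypothetical, and the memory-efficient implicit differentiation used for DEQ training is omitted; only the stochastic forward iteration is shown.

```python
import numpy as np

def denoiser(x):
    """Stand-in for a learned RED denoiser (a trained CNN in practice)."""
    return 0.8 * x

def oder_fixed_point(y, A, gamma=0.1, tau=0.5, batch=8, n_iter=300, seed=0):
    """Fixed-point iteration of a RED-style operator in which the full
    data-consistency gradient A^T(Ax - y) is replaced by an unbiased
    minibatch estimate over measurement rows (forward iteration only)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        idx = rng.choice(m, size=batch, replace=False)       # random measurement block
        g = (m / batch) * A[idx].T @ (A[idx] @ x - y[idx])   # stochastic data gradient
        x = x - gamma * (g + tau * (x - denoiser(x)))        # RED-style update
    return x

# Toy usage.
rng = np.random.default_rng(2)
A = rng.normal(size=(64, 16)) / np.sqrt(64)
x_true = rng.normal(size=16)
y = A @ x_true + 0.01 * rng.normal(size=64)
x_hat = oder_fixed_point(y, A)
```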

Monotonically Convergent Regularization by Denoising

Feb 10, 2022
Yuyang Hu, Jiaming Liu, Xiaojian Xu, Ulugbek S. Kamilov

Regularization by denoising (RED) is a widely-used framework for solving inverse problems by leveraging image denoisers as image priors. Recent work has reported the state-of-the-art performance of RED in a number of imaging applications using pre-trained deep neural nets as denoisers. Despite the recent progress, the stable convergence of RED algorithms remains an open problem. The existing RED theory only guarantees stability for convex data-fidelity terms and nonexpansive denoisers. This work addresses this issue by developing a new monotone RED (MRED) algorithm, whose convergence does not require nonexpansiveness of the deep denoising prior. Simulations on image deblurring and compressive sensing recovery from random matrices show the stability of MRED even when the traditional RED algorithm diverges.
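
One generic way to obtain the monotone behavior described above is to pair the RED gradient-style update with a backtracking step-size check on a monitored value, as in the sketch below. The monitored surrogate, the toy denoiser, and the step-halving rule are assumptions for illustration and are not necessarily the exact MRED update from the paper.

```python
import numpy as np

def denoiser(x):
    """Simple nonlinear stand-in for a deep denoising prior."""
    return np.tanh(x)

def red_value(x, A, y, tau):
    """Monitored RED-style value f(x) + (tau/2) x^T (x - D(x)); used here only
    to illustrate enforcing non-increasing behavior."""
    return 0.5 * np.sum((A @ x - y) ** 2) + 0.5 * tau * x @ (x - denoiser(x))

def monotone_red(y, A, tau=0.1, gamma0=1.0, n_iter=100):
    """RED gradient-style update with a backtracking step-size check so the
    monitored value never increases (a generic monotone safeguard)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y) + tau * (x - denoiser(x))
        gamma = gamma0
        while (red_value(x - gamma * g, A, y, tau) > red_value(x, A, y, tau)
               and gamma > 1e-8):
            gamma *= 0.5                                 # backtrack until non-increasing
        x = x - gamma * g
    return x

# Toy usage on a random linear inverse problem.
rng = np.random.default_rng(3)
A = rng.normal(size=(32, 16)) / np.sqrt(32)
y = A @ rng.normal(size=16) + 0.01 * rng.normal(size=32)
x_hat = monotone_red(y, A)
```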

Bregman Plug-and-Play Priors

Feb 04, 2022
Abdullah H. Al-Shabili, Xiaojian Xu, Ivan Selesnick, Ulugbek S. Kamilov

The past few years have seen a surge of activity around the integration of deep learning networks and optimization algorithms for solving inverse problems. Recent work on plug-and-play priors (PnP), regularization by denoising (RED), and deep unfolding has shown the state-of-the-art performance of such integration in a variety of applications. However, the current paradigm for designing such algorithms is inherently Euclidean, due to the use of the quadratic norm within the projection and proximal operators. We propose to broaden this perspective by considering a non-Euclidean setting based on the more general Bregman distance. Our new Bregman Proximal Gradient Method variant of PnP (PnP-BPGM) and Bregman Steepest Descent variant of RED (RED-BSD) replace the traditional quadratic-norm updates in PnP and RED with updates based on a more general Bregman distance. We present a theoretical convergence result for PnP-BPGM and demonstrate the effectiveness of our algorithms on Poisson linear inverse problems.
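
The sketch below shows a Bregman (mirror) proximal-gradient step for a Poisson linear model using the Burg entropy h(x) = -sum(log x_i) as the kernel, followed by a plug-and-play denoising step. The trivial denoiser, the step size, and the exact coupling of the denoiser are illustrative assumptions rather than the paper's PnP-BPGM implementation.

```python
import numpy as np

def denoiser(x):
    """Stand-in for a learned plug-and-play denoiser (kept positivity-preserving)."""
    return np.clip(x, 1e-6, None)

def pnp_bpgm_burg(y, A, gamma=0.02, n_iter=200):
    """Bregman (mirror) proximal-gradient sketch for a Poisson linear model with
    the Burg entropy h(x) = -sum(log x_i) as the kernel, so the iterates stay
    positive without an explicit projection."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        Ax = A @ x
        g = A.T @ (1.0 - y / np.maximum(Ax, 1e-12))        # Poisson NLL gradient
        # Mirror step with grad h(x) = -1/x:  1/x_new = 1/x + gamma * g
        x = 1.0 / np.maximum(1.0 / x + gamma * g, 1e-8)    # clamp keeps x positive
        x = denoiser(x)                                    # plug-and-play prior step
    return x

# Toy Poisson inverse problem with a nonnegative forward operator.
rng = np.random.default_rng(4)
A = rng.uniform(0.0, 1.0, size=(40, 10))
x_true = rng.uniform(0.5, 2.0, size=10)
y = rng.poisson(A @ x_true).astype(float)
x_hat = pnp_bpgm_burg(y, A)
```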

Learning-based Motion Artifact Removal Networks (LEARN) for Quantitative $R_2^\ast$ Mapping

Sep 03, 2021
Xiaojian Xu, Satya V. V. N. Kothapalli, Jiaming Liu, Sayan Kahali, Weijie Gan, Dmitriy A. Yablonskiy, Ulugbek S. Kamilov

Purpose: To introduce two novel learning-based motion artifact removal networks (LEARN) for the estimation of quantitative motion- and $B0$-inhomogeneity-corrected $R_2^\ast$ maps from motion-corrupted multi-Gradient-Recalled Echo (mGRE) MRI data. Methods: We train two convolutional neural networks (CNNs) to correct motion artifacts for high-quality estimation of quantitative $B0$-inhomogeneity-corrected $R_2^\ast$ maps from mGRE sequences. The first CNN, LEARN-IMG, performs motion correction on complex mGRE images to enable the subsequent computation of high-quality motion-free quantitative $R_2^\ast$ (and any other mGRE-enabled) maps using the standard voxel-wise analysis or machine-learning-based analysis. The second CNN, LEARN-BIO, is trained to directly generate motion- and $B0$-inhomogeneity-corrected quantitative $R_2^\ast$ maps from motion-corrupted magnitude-only mGRE images by taking advantage of the biophysical model describing the mGRE signal decay. We show that both CNNs trained on synthetic MR images are capable of suppressing motion artifacts while preserving details in the predicted quantitative $R_2^\ast$ maps. Our trained models also achieve a significant reduction of motion artifacts on experimental in vivo motion-corrupted data. Conclusion: Both LEARN-IMG and LEARN-BIO can enable the computation of high-quality motion- and $B0$-inhomogeneity-corrected $R_2^\ast$ maps. LEARN-IMG performs motion correction on mGRE images and relies on the subsequent analysis for the estimation of $R_2^\ast$ maps, while LEARN-BIO directly performs motion- and $B0$-inhomogeneity-corrected $R_2^\ast$ estimation. Both LEARN-IMG and LEARN-BIO jointly process all the available gradient echoes, which enables them to exploit spatial patterns available in the data. The high computational speed of LEARN-BIO is an advantage that can lead to broader clinical application.
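
For context on the "standard voxel-wise analysis" mentioned above, the toy sketch below fits a monoexponential decay S(t) = S0 * exp(-R2* * t) to mGRE magnitudes by log-linear least squares. It deliberately omits the $B0$-inhomogeneity factor F(t) of the biophysical model used in these papers, so it is only a simplified illustration of how $R_2^\ast$ values are obtained voxel by voxel.

```python
import numpy as np

def fit_r2star(mag, te):
    """Voxel-wise monoexponential fit S(t) = S0 * exp(-R2* * t) to mGRE
    magnitudes via log-linear least squares (B0-inhomogeneity factor omitted).
    mag: (n_voxels, n_echoes) magnitudes; te: (n_echoes,) echo times in seconds."""
    logs = np.log(np.maximum(mag, 1e-12))
    design = np.stack([np.ones_like(te), -te], axis=1)    # columns: [log S0, R2*]
    coef, *_ = np.linalg.lstsq(design, logs.T, rcond=None)
    return coef[1]                                        # R2* per voxel (1/s)

# Toy check: recover R2* = 30 1/s from noiseless synthetic echoes.
te = np.arange(4, 41, 4) * 1e-3                           # echo times, seconds
sig = 100.0 * np.exp(-30.0 * te)[None, :]                 # one synthetic voxel
print(fit_r2star(sig, te))                                # approximately [30.]
```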

SGD-Net: Efficient Model-Based Deep Learning with Theoretical Guarantees

Jan 22, 2021
Jiaming Liu, Yu Sun, Weijie Gan, Xiaojian Xu, Brendt Wohlberg, Ulugbek S. Kamilov

Deep unfolding networks have recently gained popularity in the context of solving imaging inverse problems. However, the computational and memory complexity of data-consistency layers within traditional deep unfolding networks scales with the number of measurements, limiting their applicability to large-scale imaging inverse problems. We propose SGD-Net as a new methodology for improving the efficiency of deep unfolding through stochastic approximations of the data-consistency layers. Our theoretical analysis shows that SGD-Net can be trained to approximate batch deep unfolding networks to an arbitrary precision. Our numerical results on intensity diffraction tomography and sparse-view computed tomography show that SGD-Net can match the performance of the batch network at a fraction of training and testing complexity.
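
The sketch below shows the structural idea behind SGD-Net's forward pass: each unrolled data-consistency layer uses an unbiased minibatch of measurement rows instead of the full operator, followed by a learned refinement. The learned_refiner placeholder, the linear model, and all parameters are assumptions for illustration; the trained network and its approximation guarantees are developed in the paper.

```python
import numpy as np

def learned_refiner(x):
    """Placeholder for the trained per-layer CNN in a deep unfolding network."""
    return 0.9 * x

def sgd_net_forward(y, A, gamma=0.1, batch=8, n_layers=10, seed=0):
    """Unrolled reconstruction in which each data-consistency layer uses an
    unbiased minibatch of measurement rows instead of the full operator
    (structural sketch of the forward pass only)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = A.T @ y
    for _ in range(n_layers):
        idx = rng.choice(m, size=batch, replace=False)
        x = x - gamma * (m / batch) * A[idx].T @ (A[idx] @ x - y[idx])
        x = learned_refiner(x)
    return x

# Toy usage.
rng = np.random.default_rng(5)
A = rng.normal(size=(64, 16)) / np.sqrt(64)
y = A @ rng.normal(size=16)
x_hat = sgd_net_forward(y, A)
```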

Image Restoration using Total Variation Regularized Deep Image Prior

Oct 30, 2018
Jiaming Liu, Yu Sun, Xiaojian Xu, Ulugbek S. Kamilov

In the past decade, sparsity-driven regularization has led to significant improvements in image reconstruction. Traditional regularizers, such as total variation (TV), rely on analytical models of sparsity. However, the field is increasingly moving towards trainable models inspired by deep learning. Deep image prior (DIP) is a recent regularization framework that uses a convolutional neural network (CNN) architecture without data-driven training. This paper extends the DIP framework by combining it with traditional TV regularization. We show that the inclusion of TV leads to considerable performance gains when tested on several traditional restoration tasks such as image denoising and deblurring.
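
The sketch below writes out the composite objective behind this combination, a data-fidelity term plus an anisotropic TV penalty, evaluated at a candidate image. In DIP-TV the image is parameterized as the output of an untrained CNN, x = f_theta(z), and the loss is minimized over the network weights by backpropagation; only the loss itself, with a hypothetical forward operator H, is shown here.

```python
import numpy as np

def tv_aniso(img):
    """Anisotropic total variation: sum of absolute horizontal and vertical
    finite differences."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def dip_tv_loss(x, y, H, lam):
    """Composite objective ||H(x) - y||^2 + lam * TV(x) evaluated at an image x.
    In DIP-TV, x = f_theta(z) is produced by an untrained CNN and theta is
    optimized by backpropagation; only the loss is sketched here."""
    return np.sum((H(x) - y) ** 2) + lam * tv_aniso(x)

# Toy denoising setup (forward operator H = identity).
rng = np.random.default_rng(6)
clean = np.zeros((16, 16))
clean[4:12, 4:12] = 1.0
noisy = clean + 0.1 * rng.normal(size=clean.shape)
print(dip_tv_loss(clean, noisy, H=lambda u: u, lam=0.05))
```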

signProx: One-Bit Proximal Algorithm for Nonconvex Stochastic Optimization

Oct 23, 2018
Xiaojian Xu, Ulugbek S. Kamilov

Stochastic gradient descent (SGD) is one of the most widely used optimization methods for parallel and distributed processing of large datasets. One of the key limitations of distributed SGD is the need to regularly communicate the gradients between different computation nodes. To reduce this communication bottleneck, recent work has considered a one-bit variant of SGD, where only the sign of each gradient element is used in optimization. In this paper, we extend this idea by proposing a stochastic variant of the proximal-gradient method that also uses one bit per update element. We prove the theoretical convergence of the method for nonconvex optimization under a set of explicit assumptions. Our results indicate that the compressed method can match the convergence rate of the uncompressed one, making the proposed method potentially appealing for distributed processing of large datasets.
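
The sketch below applies a sign-compressed stochastic proximal-gradient iteration to a toy sparse least-squares problem: only the sign of the minibatch gradient enters the update, followed by the l1 proximal (soft-thresholding) step. The diminishing step-size schedule and problem setup are illustrative assumptions, not the exact conditions analyzed in the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def signprox_lasso(A, y, lam=0.1, gamma0=0.05, batch=8, n_iter=500, seed=0):
    """Stochastic proximal-gradient iteration that keeps only the sign of the
    minibatch gradient, applied to a toy sparse least-squares problem."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    for k in range(1, n_iter + 1):
        gamma = gamma0 / np.sqrt(k)                              # diminishing steps
        idx = rng.choice(m, size=batch, replace=False)
        g = A[idx].T @ (A[idx] @ x - y[idx])                     # minibatch gradient
        x = soft_threshold(x - gamma * np.sign(g), gamma * lam)  # one-bit prox step
    return x

# Toy usage on a sparse recovery problem.
rng = np.random.default_rng(7)
A = rng.normal(size=(100, 20)) / np.sqrt(100)
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 1.5]
y = A @ x_true + 0.01 * rng.normal(size=100)
x_hat = signprox_lasso(A, y)
```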
