Jeffrey A. Fessler

ALPCAH: Sample-wise Heteroscedastic PCA with Tail Singular Value Regularization

Jul 06, 2023
Javier Salazar Cavazos, Jeffrey A. Fessler, Laura Balzano

Principal component analysis (PCA) is a key dimensionality-reduction tool for a variety of data science problems. However, many applications involve heterogeneous data whose quality varies across samples because different data sources have different noise characteristics. Methods that handle such mixed-quality data are known as heteroscedastic methods. Existing methods like HePPCAT make Gaussian assumptions about the basis coefficients that may not hold in practice. Other methods such as Weighted PCA (WPCA) assume the noise variances are known, which may be difficult in practice. This paper develops a PCA method that estimates the sample-wise noise variances and uses this information to improve the estimate of the subspace basis associated with the low-rank structure of the data. It does so without distributional assumptions on the low-rank component and without assuming the noise variances are known. Simulations demonstrate the benefit of accounting for such heteroscedasticity and of using all of the data rather than retaining only the good samples, with comparisons against other PCA methods established in the literature, such as classical PCA, Robust PCA (RPCA), and HePPCAT. Code is available at https://github.com/javiersc1/ALPCAH
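
The following minimal sketch (not the paper's ALPCAH algorithm, which uses tail singular value regularization) illustrates the core idea of jointly estimating a subspace basis and sample-wise noise variances via a simple weighted-PCA alternation; all names are illustrative.

    import numpy as np

    def heteroscedastic_pca_sketch(Y, k, n_iter=50):
        """Alternate between (a) a rank-k basis fit on variance-weighted
        samples and (b) per-sample noise variance re-estimation.
        Y is d x n with one sample per column."""
        d, n = Y.shape
        v = np.ones(n)                            # per-sample noise variances
        for _ in range(n_iter):
            U, _, _ = np.linalg.svd(Y / np.sqrt(v), full_matrices=False)
            U = U[:, :k]                          # current basis estimate
            R = Y - U @ (U.T @ Y)                 # residuals off the subspace
            v = np.maximum((R**2).sum(axis=0) / (d - k), 1e-8)
        return U, v

Down-weighting the noisy columns before the SVD is what lets the low-noise samples dominate the basis estimate, which is the benefit the simulations quantify.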

* This article has been accepted for publication in the Fourteenth International Conference on Sampling Theory and Applications, accessible via IEEE Xplore 

AWFSD: Accelerated Wirtinger Flow with Score-based Diffusion Image Prior for Poisson-Gaussian Holographic Phase Retrieval

May 12, 2023
Zongyu Li, Jason Hu, Xiaojian Xu, Liyue Shen, Jeffrey A. Fessler

Phase retrieval (PR) is an essential problem in many coherent imaging systems. This work addresses holographic phase retrieval in real-world scenarios where the measurements are corrupted by a mixture of Poisson and Gaussian (PG) noise stemming from optical imaging systems. To solve this problem, we develop a novel algorithm based on accelerated Wirtinger flow that uses score-based diffusion models as the generative prior (AWFSD). In particular, we frame the PR problem as an optimization task involving both a data-fidelity term and a regularization term. We derive the gradient of the PG log-likelihood function along with its corresponding Lipschitz constant, ensuring a more accurate data-consistency term for practical measurements. We introduce a generative prior as part of our regularization approach by using a score-based diffusion model to capture (the gradient of) the image prior distribution. We provide theoretical analysis that establishes a critical-point convergence guarantee for the proposed AWFSD algorithm. Our simulation experiments demonstrate that: 1) the proposed algorithm based on the PG likelihood model enhances reconstruction compared to algorithms based solely on either a Gaussian or a Poisson likelihood; and 2) the proposed AWFSD algorithm produces reconstructions of higher image quality, both qualitatively and quantitatively, and is more robust to variations in noise levels than state-of-the-art methods for phase retrieval.
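
A minimal sketch of one accelerated iteration with a score-based prior term, assuming hypothetical callables grad_pg_nll (the gradient of the PG negative log-likelihood derived in the paper) and score (a pretrained diffusion score network); this is generic Nesterov-style acceleration, not the paper's exact AWFSD update.

    import numpy as np

    def awfsd_step_sketch(x, x_prev, t, grad_pg_nll, score, step, lam):
        """One accelerated gradient-style update: momentum extrapolation,
        then a step along the data-fit gradient minus a scaled score
        (the score network approximates grad log p(x))."""
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t**2)) / 2.0
        z = x + ((t - 1.0) / t_next) * (x - x_prev)   # extrapolation
        g = grad_pg_nll(z) - lam * score(z)           # fidelity + prior
        return z - step * g, x, t_next

The Lipschitz constant derived in the paper would set the step size to support the critical-point convergence guarantee the analysis establishes.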


Dynamic Subspace Estimation with Grassmannian Geodesics

Mar 26, 2023
Cameron J. Blocker, Haroon Raja, Jeffrey A. Fessler, Laura Balzano

Dynamic subspace estimation, or subspace tracking, is a fundamental problem in statistical signal processing and machine learning. This paper considers a geodesic model for time-varying subspaces. The natural objective function for this model is non-convex. We propose a novel algorithm for minimizing this objective and estimating the model parameters from data using Grassmannian-constrained optimization, and we show that with this algorithm the objective is monotonically non-increasing. We demonstrate the performance of this model and our algorithm on synthetic data, video data, and dynamic fMRI data.
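
For reference, a Grassmannian geodesic through a subspace span(U0) with a horizontal tangent direction H can be evaluated in closed form via the thin SVD of H. This standard formula sketches the parametrization such a geodesic model uses; it is not the paper's fitting algorithm.

    import numpy as np

    def grassmann_geodesic(U0, H, t):
        """Evaluate the geodesic at time t, where U0 (d x k) has
        orthonormal columns and H (d x k) satisfies U0.T @ H = 0.
        With thin SVD H = W diag(s) Vt:
          U(t) = U0 Vt.T cos(diag(s) t) Vt + W sin(diag(s) t) Vt."""
        W, s, Vt = np.linalg.svd(H, full_matrices=False)
        return (U0 @ Vt.T) @ np.diag(np.cos(s * t)) @ Vt \
               + W @ np.diag(np.sin(s * t)) @ Vt

Fitting the model then amounts to estimating U0 and H (and the time scale) from the data snapshots, which is where the Grassmannian-constrained optimization comes in.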


A Complex Quasi-Newton Proximal Method for Image Reconstruction in Compressed Sensing MRI

Mar 05, 2023
Tao Hong, Luis Hernandez-Garcia, Jeffrey A. Fessler

Model-based methods are widely used for reconstruction in compressed sensing (CS) magnetic resonance imaging (MRI), using priors to describe the images of interest. The reconstruction process is equivalent to solving a composite optimization problem, for which accelerated proximal methods (APMs) are very popular approaches. This paper proposes a complex quasi-Newton proximal method (CQNPM) for wavelet- and total variation-based CS MRI reconstruction. Compared with APMs, CQNPM requires fewer iterations to converge but must compute a more challenging proximal mapping, called the weighted proximal mapping (WPM). To make CQNPM more practical, we propose efficient methods for solving the related WPM. Numerical experiments demonstrate the effectiveness and efficiency of CQNPM.
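
The WPM replaces the Euclidean distance in the usual proximal mapping with a metric induced by the quasi-Newton matrix. In the simple special case of a diagonal metric and an l1 regularizer, the WPM has a closed form; the sketch below shows that case for complex images, whereas the paper's efficient methods target the harder non-diagonal metrics.

    import numpy as np

    def weighted_prox_l1(z, lam, d):
        """argmin_x lam*||x||_1 + 0.5*(x - z)^H diag(d) (x - z).
        The problem is separable, so each complex coordinate shrinks
        its magnitude by lam/d_i while keeping its phase."""
        mag = np.abs(z)
        shrunk = np.maximum(mag - lam / d, 0.0)
        return np.where(mag > 0, z * shrunk / np.maximum(mag, 1e-12), 0.0)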

* 11 pages, 10 figures 

Adaptive Sampling for Linear Sensing Systems via Langevin Dynamics

Feb 27, 2023
Guanhua Wang, Douglas C. Noll, Jeffrey A. Fessler

Adaptive, or dynamic, signal sampling in sensing systems adjusts subsequent sampling strategies based on the signals acquired so far, thereby potentially improving image quality and speed. This paper proposes a Bayesian method for adaptive sampling based on greedy variance reduction and stochastic gradient Langevin dynamics (SGLD). The image priors involved can be either analytical or neural network-based. Notably, the learned image priors generalize well to out-of-distribution test cases whose statistics differ from those of the training dataset. As a real-world validation, the method is applied to accelerating the acquisition of magnetic resonance imaging (MRI). Compared to non-adaptive sampling, the proposed method improved image quality by 2-3 dB in PSNR and better restored subtle details.
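
A minimal sketch of the two ingredients, with a hypothetical grad_log_post (data-fit gradient plus an analytical or network prior; a full gradient is used here for simplicity, where SGLD proper would use minibatch gradients) and a pool of candidate linear measurements. The paper's greedy variance-reduction criterion is simplified here to "pick the candidate whose predicted measurement varies most across posterior samples."

    import numpy as np

    def langevin_samples(grad_log_post, x0, step, n_samples, rng):
        """Langevin dynamics: x <- x + step * grad log p(x|y)
        + sqrt(2*step) * noise, collecting approximate posterior samples."""
        x, out = x0.copy(), []
        for _ in range(n_samples):
            noise = rng.standard_normal(x.shape)
            x = x + step * grad_log_post(x) + np.sqrt(2.0 * step) * noise
            out.append(x.copy())
        return np.stack(out)

    def pick_next_measurement(samples, candidates):
        """Greedy rule: choose the candidate row a_j whose predicted
        value a_j @ x has the largest variance over the samples."""
        preds = candidates @ samples.T        # (n_candidates, n_samples)
        return int(np.argmax(preds.var(axis=1)))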

* 5 pages, 4 figures 

HeMPPCAT: Mixtures of Probabilistic Principal Component Analysers for Data with Heteroscedastic Noise

Jan 25, 2023
Alec S. Xu, Laura Balzano, Jeffrey A. Fessler

Mixtures of probabilistic principal component analysers (MPPCA) is a well-known mixture-model extension of principal component analysis (PCA). Similar to PCA, MPPCA assumes the data samples in each mixture component contain homoscedastic noise. However, datasets with heterogeneous noise across samples are becoming increasingly common, as larger datasets are generated by collecting samples from several sources with varying noise profiles. The performance of MPPCA is suboptimal for data with heteroscedastic noise across samples. This paper proposes HeMPPCAT, a heteroscedastic mixtures of probabilistic PCA technique that uses a generalized expectation-maximization (GEM) algorithm to jointly estimate the unknown underlying factors, means, and noise variances under a heteroscedastic noise setting. Simulation results illustrate the improved factor estimates and clustering accuracies of HeMPPCAT compared to MPPCA.
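
For intuition, the heteroscedastic twist already shows up in the E-step: the covariance that sample i "sees" under component c is F_c F_c^T + v_i I, with a per-sample variance v_i instead of a shared one. A sketch of that responsibility computation follows (illustrative only; HeMPPCAT's GEM algorithm also includes M-step updates for the factors, means, and variances).

    import numpy as np
    from scipy.stats import multivariate_normal

    def e_step_responsibilities(Y, pis, mus, Fs, v):
        """Y: n x d data; pis, mus, Fs: per-component mixing weights,
        means, and factor matrices; v: length-n per-sample variances."""
        n, d = Y.shape
        R = np.zeros((n, len(pis)))
        for c, (pi_c, mu_c, F_c) in enumerate(zip(pis, mus, Fs)):
            low_rank = F_c @ F_c.T
            for i in range(n):
                cov = low_rank + v[i] * np.eye(d)   # sample-specific
                R[i, c] = pi_c * multivariate_normal.pdf(Y[i], mu_c, cov)
        return R / R.sum(axis=1, keepdims=True)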


Training End-to-End Unrolled Iterative Neural Networks for SPECT Image Reconstruction

Jan 23, 2023
Zongyu Li, Yuni K. Dewaraja, Jeffrey A. Fessler

Training end-to-end unrolled iterative neural networks for SPECT image reconstruction requires a memory-efficient forward-backward projector for efficient backpropagation. This paper describes an open-source, high-performance Julia implementation of a SPECT forward-backward projector that supports memory-efficient backpropagation with an exact adjoint. Our Julia projector uses only ~5% of the memory of an existing Matlab-based projector. We compare unrolling a CNN-regularized expectation-maximization (EM) algorithm with end-to-end training using our Julia projector against other training methods such as gradient truncation (ignoring gradients involving the projector) and sequential training, using XCAT phantoms and virtual patient (VP) phantoms generated from SIMIND Monte Carlo (MC) simulations. Simulation results with two different radionuclides (90Y and 177Lu) show that: 1) for 177Lu XCAT phantoms and 90Y VP phantoms, training the unrolled EM algorithm end-to-end with our Julia projector yields the best reconstruction quality, both qualitatively and quantitatively, compared to other training methods and OSEM; 2) for VP phantoms with the 177Lu radionuclide, the reconstructed images from end-to-end training have higher quality than those from sequential training and OSEM, but are comparable to those from gradient truncation. We also find a trade-off between computational cost and reconstruction accuracy across training methods: end-to-end training has the highest accuracy because the correct gradient is used in backpropagation, whereas sequential training yields worse reconstruction accuracy but is significantly faster and uses much less memory.
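
The "exact adjoint" property is straightforward to check numerically with a dot-product test, which is how one would validate any such projector pair (shown here in Python with hypothetical forward/back wrappers; the projector itself is Julia, but the same test applies):

    import numpy as np

    def adjoint_test(forward, back, x_shape, y_shape, rng, rtol=1e-6):
        """For an exact adjoint pair (A, A'), <A x, y> == <x, A' y>
        up to floating-point error, for random x and y."""
        x = rng.standard_normal(x_shape)
        y = rng.standard_normal(y_shape)
        lhs = np.vdot(forward(x), y)
        rhs = np.vdot(x, back(y))
        return abs(lhs - rhs) <= rtol * max(abs(lhs), abs(rhs))

Gradient truncation skips backpropagating through the projector entirely, avoiding this requirement at the cost of a biased gradient, which is the trade-off the comparison quantifies.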

* submitted to IEEE TRPMS 

Stochastic Optimization of 3D Non-Cartesian Sampling Trajectory (SNOPY)

Sep 22, 2022
Guanhua Wang, Jon-Fredrik Nielsen, Jeffrey A. Fessler, Douglas C. Noll

Optimizing 3D k-space sampling trajectories for efficient MRI is important yet challenging. This work proposes a generalized framework for optimizing 3D non-Cartesian sampling patterns via data-driven optimization. We built a differentiable MRI system model to enable gradient-based methods for sampling trajectory optimization. By combining training losses, the algorithm can simultaneously optimize multiple properties of sampling patterns, including image quality, hardware constraints (maximum slew rate and gradient strength), reduced peripheral nerve stimulation (PNS), and parameter-weighted contrast. The proposed method can either optimize the gradient waveform (spline-based freeform optimization) or optimize properties of given sampling trajectories (such as the rotation angle of radial trajectories). Notably, the method optimizes sampling trajectories synergistically with either model-based or learning-based reconstruction methods. We propose several strategies to alleviate the severe non-convexity and heavy computational demand posed by the high-dimensional optimization. The corresponding code is organized as an open-source, easy-to-use toolbox. We applied the optimized trajectories to multiple applications, including structural and functional imaging. In the simulation studies, the reconstruction PSNR of a 3D kooshball trajectory was increased by 4 dB with SNOPY optimization. In the prospective studies, by optimizing the rotation angles of a stack-of-stars (SOS) trajectory, SNOPY improved the PSNR by 1.4 dB compared to the best empirical method. Optimizing the gradient waveform of a rotational EPI trajectory improved subjects' ratings of the PNS effect from 'strong' to 'mild.' In short, SNOPY provides an efficient, data-driven, optimization-based method for tailoring non-Cartesian sampling trajectories.
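
A minimal sketch of how several of those properties can enter one differentiable loss, assuming the trajectory tensor and a reconstruction-error term come from the differentiable system model (the names and the soft-penalty form are illustrative, and physical constants such as the gyromagnetic ratio are omitted):

    import torch

    def snopy_style_loss(traj, recon_err, dt, g_max, s_max,
                         lam_g=1.0, lam_s=1.0):
        """traj: (n_shots, n_time, 3) k-space trajectory. The gradient
        waveform is proportional to its time derivative and the slew
        rate to the second derivative; hardware limits become soft
        penalties added to the image-quality term."""
        grad = torch.diff(traj, dim=1) / dt
        slew = torch.diff(grad, dim=1) / dt
        pen_g = torch.relu(grad.abs() - g_max).pow(2).mean()
        pen_s = torch.relu(slew.abs() - s_max).pow(2).mean()
        return recon_err + lam_g * pen_g + lam_s * pen_s

Because every term is differentiable in traj, stochastic gradient methods can update the sampling pattern jointly with the reconstruction method.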

* 13 pages, 8 figures 

Sparse-view Cone Beam CT Reconstruction using Data-consistent Supervised and Adversarial Learning from Scarce Training Data

Jan 23, 2022
Anish Lahiri, Marc Klasky, Jeffrey A. Fessler, Saiprasad Ravishankar

Reconstruction of CT images from a limited set of projections through an object is important in several applications, ranging from medical imaging to industrial settings. As the number of available projections decreases, traditional reconstruction techniques such as the FDK algorithm and model-based iterative reconstruction methods perform poorly. Recently, data-driven methods such as deep learning-based reconstruction have garnered much attention because they yield better performance when enough training data are available. However, even these methods have their limitations when training data are scarce. This work focuses on image reconstruction in such settings, i.e., when both the number of available CT projections and the amount of training data are extremely limited. We adopt a sequential reconstruction approach over several stages, using an adversarially trained shallow network for 'destreaking' followed by a data-consistency update in each stage. To deal with the challenge of limited data, we train our method on image subvolumes and use patch aggregation during testing. To deal with the computational challenge of learning on 3D datasets for 3D reconstruction, we use a hybrid 3D-to-2D mapping network for the 'destreaking' part. Comparisons to other methods over several test examples indicate that the proposed method has much potential when both the number of projections and the available training data are highly limited.
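
The staged structure is simple to express; a sketch with hypothetical callables (each destreak_net stands in for an adversarially trained hybrid 3D-to-2D network, and data_consistency would run a few iterations on the data-fit term, e.g. conjugate gradient, initialized at the current estimate):

    def staged_reconstruction(x0, y, destreak_nets, data_consistency):
        """Alternate learned destreaking with enforcement of fidelity
        to the measured projections y, one pair per stage."""
        x = x0
        for net in destreak_nets:
            x = net(x)                  # suppress streak artifacts
            x = data_consistency(x, y)  # pull back toward the data
        return x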


Efficient approximation of Jacobian matrices involving a non-uniform fast Fourier transform (NUFFT)

Nov 04, 2021
Guanhua Wang, Jeffrey A. Fessler

There is growing interest in learning k-space sampling patterns for MRI using optimization approaches. For non-Cartesian sampling patterns, reconstruction methods typically involve non-uniform FFT (NUFFT) operations. A typical NUFFT method performs frequency-domain interpolation using Kaiser-Bessel kernel values that are retrieved by nearest-neighbor look-up in a finely tabulated kernel. That look-up operation is not differentiable with respect to the sampling pattern, complicating the use of auto-differentiation for backpropagation (stochastic gradient descent) in sampling pattern optimization. This paper describes an efficient and accurate approach for computing approximate gradients with respect to the sampling pattern for learning k-space sampling. Various numerical experiments validate the accuracy of the proposed approximation. We also showcase trajectories optimized for different iterative reconstruction algorithms, including smooth convex regularized reconstruction and compressed sensing-based reconstruction.
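
The identity underlying such approximations is easy to state and verify in 1D: the derivative of a nonuniform DTFT sample F(w) = sum_n x[n] e^{-i w n} with respect to the frequency w is the same transform applied to (-i n) x[n], so it too can be evaluated with a NUFFT. A quick finite-difference check (exact sums here stand in for NUFFT calls):

    import numpy as np

    rng = np.random.default_rng(0)
    N = 64
    n = np.arange(N) - N // 2
    x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
    w = 0.37                                   # one sampling location

    F = lambda om: np.sum(x * np.exp(-1j * om * n))
    analytic = np.sum(-1j * n * x * np.exp(-1j * w * n))
    numeric = (F(w + 1e-6) - F(w - 1e-6)) / 2e-6
    print(abs(analytic - numeric))             # agreement to ~1e-6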

* 9 pages, 4 figures 