
School of ECE and BME of Purdue University

Abstract:X-ray computed tomography (CT) reconstructs the internal morphology of a three-dimensional object from a collection of projection images, most commonly using a single rotation axis. However, for objects containing dense materials like metal, the use of a single rotation axis may leave some regions of the object obscured by the metal, even though projections from other rotation axes (or poses) might contain complementary information that would better resolve these obscured regions. In this paper, we propose pixel-weighted Multi-pose Fusion to reduce metal artifacts by fusing the information from complementary measurement poses into a single reconstruction. Our method uses Multi-Agent Consensus Equilibrium (MACE), an extension of Plug-and-Play, as a framework for integrating projection data from different poses. A primary novelty of the proposed method is that the outputs of different MACE agents are fused in a pixel-weighted manner to minimize the effects of metal throughout the reconstruction. Using real CT data on an object with and without metal inserts, we demonstrate that the proposed pixel-weighted Multi-pose Fusion method significantly reduces metal artifacts relative to single-pose reconstructions.
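To make the pixel-weighted fusion idea concrete, here is a minimal NumPy sketch of a weighted consensus step, where each pose agent's image estimate is blended with per-pixel weights; the mask-based weighting below is a hypothetical illustration, not the paper's exact scheme.

```python
import numpy as np

def weighted_consensus(agent_images, weights):
    """Pixel-weighted consensus: blend one image estimate per pose
    agent using per-pixel weights that sum to 1 over the agent axis."""
    return np.sum(weights * agent_images, axis=0)

# Hypothetical example: down-weight pose 0 wherever a metal mask
# indicates its projections are corrupted, then renormalize.
K, H, W = 2, 64, 64
agent_images = np.random.rand(K, H, W)
metal_mask = np.zeros((H, W))
metal_mask[24:40, 24:40] = 1.0
weights = np.ones((K, H, W))
weights[0] -= 0.9 * metal_mask                 # distrust pose 0 near metal
weights /= weights.sum(axis=0, keepdims=True)  # weights sum to 1 per pixel
fused = weighted_consensus(agent_images, weights)
```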


Abstract:Coherent LIDAR uses a chirped laser pulse for 3D imaging of distant targets. However, existing coherent LIDAR image reconstruction methods do not account for the system's aperture, resulting in sub-optimal resolution. Moreover, these methods use majorization-minimization for computational efficiency, but do so without a theoretical treatment of convergence. In this paper, we present Coherent LIDAR Aperture Modeled Plug-and-Play (CLAMP) for multi-look coherent LIDAR image reconstruction. CLAMP uses multi-agent consensus equilibrium (a form of PnP) to combine a neural network denoiser with an accurate physics-based forward model. CLAMP introduces an FFT-based method to account for the effects of the aperture and uses majorization of the forward model for computational efficiency. We also formalize the use of majorization-minimization in consensus optimization problems and prove convergence to the exact consensus equilibrium solution. Finally, we apply CLAMP to synthetic and measured data to demonstrate its effectiveness in producing high-resolution, speckle-free, 3D imagery.
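The aperture-modeling idea can be sketched generically: treat the aperture as a pupil mask applied to the field's spectrum via the FFT. This is a toy illustration of an FFT-applied aperture transfer function, not CLAMP's actual forward model; the circular pupil below is an assumption.

```python
import numpy as np

def apply_aperture(field, pupil):
    """Apply an aperture as a mask on the (centered) 2D spectrum."""
    spectrum = np.fft.fftshift(np.fft.fft2(field))
    return np.fft.ifft2(np.fft.ifftshift(spectrum * pupil))

# Hypothetical circular pupil passing the central band of frequencies.
H = W = 64
fy, fx = np.meshgrid(np.arange(H) - H // 2, np.arange(W) - W // 2, indexing="ij")
pupil = ((fx**2 + fy**2) <= (H // 8) ** 2).astype(float)
field = np.random.randn(H, W) + 1j * np.random.randn(H, W)
aperture_limited = apply_aperture(field, pupil)
```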


Abstract:X-ray computed tomography (CT) based on photon counting detectors (PCD) extends standard CT by counting detected photons in multiple energy bins. PCD data can be used to increase the contrast-to-noise ratio (CNR), increase spatial resolution, reduce radiation dose, reduce injected contrast dose, and compute a material decomposition using a specified set of basis materials. Current commercial and prototype clinical photon counting CT systems utilize PCD-CT reconstruction methods that either reconstruct from each spectral bin separately, or first create an estimate of a material sinogram using a specified set of basis materials and then reconstruct from these material sinograms. However, existing methods are not able to utilize simultaneously and in a modular fashion both the measured spectral information and advanced prior models in order to produce a material decomposition. We describe an efficient, modular framework for PCD-based CT reconstruction and material decomposition based on Multi-Agent Consensus Equilibrium (MACE). Our method employs a detector proximal map, or agent, that uses PCD measurements to update an estimate of the pathlength sinogram. We also create a prior agent in the form of a sinogram denoiser that enforces both physical and empirical knowledge about the material-decomposed sinogram. The sinogram reconstruction is computed using the MACE algorithm, which finds an equilibrium solution between the two agents, and the final image is reconstructed from the estimated sinogram. Importantly, the modularity of our method allows the two agents to be designed, implemented, and optimized independently. Our results on simulated data show a substantial (450%) CNR boost versus conventional maximum likelihood reconstruction when applied to a phantom used to evaluate low contrast detectability.
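For readers unfamiliar with MACE, the equilibrium between two agents is typically computed with a Mann iteration on the map (2G - I)(2F - I), where F applies each agent to its own copy of the state and G averages the copies. Below is a generic two-agent sketch with toy stand-ins for the detector proximal map and sinogram denoiser; the real agents are defined by the paper's physics and prior models.

```python
import numpy as np

def mace_two_agents(F1, F2, x0, rho=0.5, num_iters=100):
    """Solve the two-agent MACE equilibrium via Mann iteration:
    w <- (1 - rho) * w + rho * (2G - I)(2F - I)(w)."""
    W = np.stack([x0, x0])                      # one state copy per agent
    for _ in range(num_iters):
        FW = np.stack([F1(W[0]), F2(W[1])])     # agent updates
        Z = 2 * FW - W                          # (2F - I)(W)
        z_bar = Z.mean(axis=0)                  # consensus average G
        W = (1 - rho) * W + rho * (2 * z_bar - Z)
    return W.mean(axis=0)

# Toy agents (placeholders, not the paper's): a quadratic data-fit
# proximal map and a simple contraction standing in for the denoiser.
y = np.random.rand(32, 32)
F1 = lambda v: (v + y) / 2                      # prox of 0.5 * ||x - y||^2
F2 = lambda v: 0.9 * v
x_hat = mace_two_agents(F1, F2, np.zeros_like(y))
```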


Abstract:Deep neural networks (DNN) are commonly used to denoise and sharpen X-ray computed tomography (CT) images with the goal of reducing patient X-ray dosage while maintaining reconstruction quality. However, naive application of DNN-based methods can result in image texture that is undesirable in clinical applications. Alternatively, generative adversarial network (GAN) based methods can produce appropriate texture, but naive application of GANs can introduce inaccurate or even unreal image detail. In this paper, we propose a texture matching generative adversarial network (TMGAN) that enhances CT images while generating an image texture that can be matched to a target texture. We use parallel generators to separate anatomical features from the generated texture, which allows the GAN to be trained to match the desired texture without directly affecting the underlying CT image. We demonstrate that TMGAN generates enhanced image quality while also producing image texture that is desirable for clinical application.
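One way to read the parallel-generator design is as an additive decomposition: an anatomy branch produces the clean image while a texture branch produces a residual that the discriminator pushes toward the target texture. The sketch below is a hypothetical reading of the abstract, not the paper's architecture or training procedure.

```python
import numpy as np

def tmgan_enhance(x, g_anatomy, g_texture):
    """Combine two parallel generator branches: anatomical content
    plus a separately generated texture residual."""
    return g_anatomy(x) + g_texture(x)

# Placeholder branches standing in for the trained networks.
g_anatomy = lambda im: im                           # identity "denoiser"
g_texture = lambda im: 0.01 * np.random.randn(*im.shape)
enhanced = tmgan_enhance(np.random.rand(64, 64), g_anatomy, g_texture)
```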


Abstract:Low x-ray dose is desirable in x-ray computed tomographic (CT) imaging due to health concerns. But low dose comes at the cost of low-signal artifacts such as streaks and low-frequency bias in the reconstruction. As a result, low signal correction is needed to help reduce artifacts while retaining relevant anatomical structures. Low signal is encountered when too few photons reach the detector to have confidence in the recorded data. X-ray photons, assumed to follow a Poisson distribution, have a signal-to-noise ratio proportional to the dose, with poorer SNR in low signal areas. Electronic noise added by the data acquisition system further reduces the signal quality. In this paper we demonstrate a technique to combat low signal artifacts through adaptive filtration. It entails statistics-based filtering of the uncorrected data, correcting the lower signal areas more aggressively than the high signal ones. We use local averages to decide how aggressive the filtering should be, and local standard deviation to decide how much detail preservation to apply. The implementation consists of a pre-correction step, i.e., local linear minimum mean-squared error (LLMMSE) correction, followed by a variance stabilizing transform, and finally adaptive bilateral filtering. The coefficients of the bilateral filter are computed from local statistics. Results show improvements in low-frequency bias, streaks, local average and standard deviation, modulation transfer function, and noise power spectrum.
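A compact NumPy sketch of the three-stage pipeline follows: LLMMSE pre-correction from local statistics, an Anscombe variance-stabilizing transform, and a bilateral filter whose range sigma adapts to the local mean and standard deviation. All window sizes, noise parameters, and the particular adaptation rule are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_stats(x, r):
    """Local mean and standard deviation over (2r+1)x(2r+1) windows."""
    win = sliding_window_view(np.pad(x, r, mode="reflect"), (2*r + 1, 2*r + 1))
    return win.mean(axis=(-2, -1)), win.std(axis=(-2, -1))

def adaptive_bilateral(x, range_sigma_map, spatial_sigma=1.5, r=2):
    """Bilateral filter whose range sigma varies per pixel."""
    win = sliding_window_view(np.pad(x, r, mode="reflect"), (2*r + 1, 2*r + 1))
    dy, dx = np.mgrid[-r:r + 1, -r:r + 1]
    w_s = np.exp(-(dy**2 + dx**2) / (2 * spatial_sigma**2))
    w_r = np.exp(-(win - x[..., None, None])**2
                 / (2 * range_sigma_map[..., None, None]**2))
    w = w_s * w_r
    return (w * win).sum(axis=(-2, -1)) / w.sum(axis=(-2, -1))

def low_signal_correction(counts, r=2, elec_var=10.0):
    mu, sigma = local_stats(counts, r)
    # LLMMSE: shrink toward the local mean, assuming Poisson noise
    # (variance ~ mu) plus an electronic-noise floor.
    noise_var = mu + elec_var
    gain = np.clip(sigma**2 - noise_var, 0, None) / np.maximum(sigma**2, 1e-9)
    pre = mu + gain * (counts - mu)
    # Anscombe VST makes the Poisson noise approximately unit variance.
    vst = 2.0 * np.sqrt(np.maximum(pre, 0) + 3.0 / 8.0)
    # Filter more aggressively where the local mean (signal) is low;
    # tighten the range kernel where local std suggests real detail.
    mu_v, sigma_v = local_stats(vst, r)
    range_sigma = 2.0 / ((1 + mu_v / mu_v.mean()) * (1 + sigma_v / (sigma_v.mean() + 1e-9)))
    out = adaptive_bilateral(vst, np.maximum(range_sigma, 1e-3), r=r)
    return (out / 2.0)**2 - 3.0 / 8.0            # approximate inverse VST

filtered = low_signal_correction(np.random.poisson(5.0, (64, 64)).astype(float))
```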


Authors:Obaidullah Rahman, Madhuri Nagare, Ken D. Sauer, Charles A. Bouman, Roman Melnyk, Brian Nett, Jie Tang

Abstract:In computed tomographic imaging, model-based iterative reconstruction (MBIR) methods have generally shown better image quality than the more traditional, faster filtered backprojection (FBP) technique. The cost is that MBIR is computationally expensive. In this work we train a 2.5D deep learning (DL) network to mimic MBIR-quality images. The network is realized as a modified U-Net and trained using clinical FBP and MBIR image pairs. We achieve the quality of MBIR images faster and at much lower computational cost. Visually and in terms of noise power spectrum (NPS), DL-MBIR images have texture similar to that of MBIR, with reduced noise power. Image profile plots, NPS plots, standard deviation, etc. suggest that the DL-MBIR images result from a successful emulation of an MBIR operator.
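The 2.5D formulation can be illustrated as follows: the network sees a slab of adjacent FBP slices as input channels and is trained to predict the matching center MBIR slice. The slab width and pairing below are illustrative assumptions; the modified U-Net itself is not reproduced here.

```python
import numpy as np

def make_25d_pairs(fbp_volume, mbir_volume, n_slices=5):
    """Build 2.5D training pairs from co-registered FBP/MBIR volumes:
    (n_slices, H, W) FBP slabs as inputs, center MBIR slices as targets."""
    h = n_slices // 2
    inputs, targets = [], []
    for z in range(h, fbp_volume.shape[0] - h):
        inputs.append(fbp_volume[z - h : z + h + 1])  # adjacent-slice slab
        targets.append(mbir_volume[z])                # center slice target
    return np.stack(inputs), np.stack(targets)

# Toy volumes standing in for clinical data.
fbp = np.random.rand(20, 64, 64)
mbir = np.random.rand(20, 64, 64)
X, Y = make_25d_pairs(fbp, mbir)      # X: (16, 5, 64, 64), Y: (16, 64, 64)
```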


Authors:Obaidullah Rahman, Ken D. Sauer, Madhuri Nagare, Charles A. Bouman, Roman Melnyk, Jie Tang, Brian Nett

Abstract:Deep learning (DL) shows promise of advantages over conventional signal processing techniques in a variety of imaging applications. Because the networks are trained from examples of data rather than explicitly designed, they can learn signal and noise characteristics to most effectively construct a mapping from corrupted data to higher quality representations. In inverse problems, one has the option of applying DL in the domain of the originally captured data, in the transformed domain of the desired final representation, or both. X-ray computed tomography (CT), one of the most valuable tools in medical diagnostics, is already being improved by DL methods. Whether for removal of common quantum noise resulting from the Poisson-distributed photon counts, or for reduction of the ill effects of metal implants on image quality, researchers have begun employing DL widely in CT. The selection of training data is driven quite directly by the corruption on which the focus lies. However, the way in which differences between the target signal and measured data are penalized in training generally follows conventional, pointwise loss functions. This work introduces a creative technique for favoring reconstruction characteristics that are not well described by norms such as mean-squared or mean-absolute error. Particularly in a field such as X-ray CT, where radiologists' subjective preferences in image characteristics are key to acceptance, it may be desirable to penalize differences in DL more creatively. This penalty may be applied in the data domain, here the CT sinogram, or in the reconstructed image. We design loss functions for both shaping and selectively preserving the frequency content of the signal.
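As a concrete instance of shaping frequency content, a loss can weight errors per frequency band via the FFT, rather than penalizing all frequencies equally as pointwise MSE does. The radial mid-band emphasis below is a hypothetical weight map, not one of the paper's designed losses.

```python
import numpy as np

def frequency_weighted_loss(pred, target, weight_map):
    """Mean weighted squared error in the 2D Fourier domain."""
    err = np.fft.fft2(pred - target)
    return np.mean(weight_map * np.abs(err)**2)

# Hypothetical weight map emphasizing a mid-frequency band.
H = W = 64
fy, fx = np.meshgrid(np.fft.fftfreq(H), np.fft.fftfreq(W), indexing="ij")
radius = np.hypot(fx, fy)
weight_map = np.exp(-((radius - 0.2) / 0.1)**2)
loss = frequency_weighted_loss(np.random.rand(H, W), np.random.rand(H, W), weight_map)
```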



Abstract:Over the past decade, Plug-and-Play (PnP) has become a popular method for reconstructing images using a modular framework consisting of a forward and prior model. The great strength of PnP is that an image denoiser can be used as a prior model while the forward model can be implemented using more traditional physics-based approaches. However, a limitation of PnP is that it reconstructs only a single deterministic image. In this paper, we introduce Generative Plug-and-Play (GPnP), a generalization of PnP to sample from the posterior distribution. As with PnP, GPnP has a modular framework using a physics-based forward model and an image denoising prior model. However, in GPnP these models are extended to become proximal generators, which sample from associated distributions. GPnP applies these proximal generators in alternation to produce samples from the posterior. We present experimental simulations using the well-known BM3D denoiser. Our results demonstrate that the GPnP method is robust, easy to implement, and produces intuitively reasonable samples from the posterior for sparse interpolation and tomographic reconstruction. Code to accompany this paper is available at https://github.com/gbuzzard/generative-pnp-allerton.
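The alternation at the heart of GPnP can be demonstrated with a toy Gaussian model in which both proximal generators have closed forms; in the actual method, the prior-side generator is built from an image denoiser such as BM3D. The parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def prox_gen_data(v, y, sigma_w, sigma):
    """Sample from the density ~ exp(-||y - x||^2 / (2 sigma_w^2)
    - ||x - v||^2 / (2 sigma^2)); it is Gaussian, so the sample
    has a closed form."""
    var = 1.0 / (1.0 / sigma_w**2 + 1.0 / sigma**2)
    mean = var * (y / sigma_w**2 + v / sigma**2)
    return mean + np.sqrt(var) * rng.standard_normal(v.shape)

def prox_gen_prior(v, sigma_x, sigma):
    """Proximal generator for a zero-mean Gaussian prior; this slot is
    where a denoiser-based generator would go in GPnP proper."""
    var = 1.0 / (1.0 / sigma_x**2 + 1.0 / sigma**2)
    return var * v / sigma**2 + np.sqrt(var) * rng.standard_normal(v.shape)

# Alternate the two generators to draw approximate posterior samples.
y = rng.standard_normal(16) + 2.0
x = np.zeros_like(y)
samples = []
for _ in range(500):
    x = prox_gen_data(x, y, sigma_w=1.0, sigma=0.5)
    x = prox_gen_prior(x, sigma_x=3.0, sigma=0.5)
    samples.append(x.copy())
```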


Authors:Ali G. Sheikh, Casey J. Pellizzari, Sherman J. Kisner, Gregery T. Buzzard, Charles A. Bouman


Abstract:Directed energy applications require the estimation of digital-holographic (DH) phase errors due to atmospheric turbulence in order to accurately focus the outgoing beam. These phase error estimates must be computed with very low latency to keep pace with changing atmospheric parameters, which requires that phase errors be estimated in a single shot of DH data. The digital holography model-based iterative reconstruction (DH-MBIR) algorithm is capable of accurately estimating phase errors in a single shot using the expectation maximization (EM) algorithm. However, existing implementations of DH-MBIR require hundreds of iterations, which is not practical for real-time applications. In this paper, we present the Dynamic DH-MBIR (DDH-MBIR) algorithm for estimating isoplanatic phase errors from streaming single-shot data with extremely low latency. The Dynamic DH-MBIR algorithm reduces the computation and latency by orders of magnitude relative to conventional DH-MBIR, making real-time throughput and latency feasible in applications. Using simulated data that models frozen flow of atmospheric turbulence, we show that our algorithm achieves a consistently high Strehl ratio with realistic simulation parameters using only one iteration per timestep.
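The low-latency idea can be summarized in a few lines: rather than running EM to convergence on each frame, the estimate is carried forward and refined with a single EM update per timestep. `em_update` below is a placeholder for one EM iteration of the underlying DH-MBIR model; this sketch shows only the streaming structure.

```python
def dynamic_phase_estimates(frames, em_update, phi0):
    """Streaming estimation: warm-start each frame from the previous
    estimate and apply one EM update per timestep."""
    phi = phi0
    for y in frames:
        phi = em_update(phi, y)   # single iteration per timestep
        yield phi
```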



Abstract:Multi-Agent Consensus Equilibrium (MACE) formulates an inverse imaging problem as a balance among multiple update agents such as data-fitting terms and denoisers. However, each such agent operates on a separate copy of the full image, leading to redundant memory use and slow convergence when each agent affects only a small subset of the full image. In this paper, we extend MACE to Projected Multi-Agent Consensus Equilibrium (PMACE), in which each agent updates only a projected component of the full image, thus greatly reducing memory use for some applications. We describe PMACE in terms of an equilibrium problem and an equivalent fixed point problem and show that in most cases the PMACE equilibrium is not the solution of an optimization problem. To demonstrate the value of PMACE, we apply it to the problem of ptychography, in which a sample is reconstructed from the diffraction patterns resulting from coherent X-ray illumination at multiple overlapping spots. In our PMACE formulation, each spot corresponds to a separate data-fitting agent, with the final solution found as an equilibrium among all the agents. Our results demonstrate that the PMACE reconstruction algorithm generates more accurate reconstructions at a lower computational cost than existing ptychography algorithms when the spots are sparsely sampled.
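A reduced sketch of the projected-agent idea: each agent owns only one component of the full image (here, a rectangular patch defined by a slice), and a consensus step reassembles the full image by averaging agents over their overlaps. PMACE's actual ptychographic projections and probe-weighted updates are more elaborate than this illustration.

```python
import numpy as np

def projected_consensus(patches, slices, full_shape):
    """Average per-agent patch updates back into the full image."""
    acc = np.zeros(full_shape)
    cnt = np.zeros(full_shape)
    for patch, sl in zip(patches, slices):
        acc[sl] += patch      # accumulate each agent's component
        cnt[sl] += 1.0        # count overlapping contributions
    return acc / np.maximum(cnt, 1.0)

# Two hypothetical overlapping scan spots on a 16x16 image.
slices = [np.s_[0:10, 0:10], np.s_[6:16, 6:16]]
patches = [np.ones((10, 10)), 2 * np.ones((10, 10))]
x_full = projected_consensus(patches, slices, (16, 16))
```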
