Mingyang Xie

Snapshot High Dynamic Range Imaging with a Polarization Camera

Aug 16, 2023
Mingyang Xie, Matthew Chan, Christopher Metzler

High dynamic range (HDR) images are important for a range of tasks, from navigation to consumer photography. Accordingly, a host of specialized HDR sensors have been developed, the most successful of which are based on capturing variable per-pixel exposures. In essence, these methods capture an entire exposure bracket sequence at once in a single shot. This paper presents a straightforward but highly effective approach for turning an off-the-shelf polarization camera into a high-performance HDR camera. By placing a linear polarizer in front of the polarization camera, we are able to simultaneously capture four images with varied exposures, which are determined by the orientation of the polarizer. We develop an outlier-robust and self-calibrating algorithm to reconstruct an HDR image (at a single polarity) from these measurements. Finally, we demonstrate the efficacy of our approach with extensive real-world experiments.

9 pages, 10 figures
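
The core fusion step is easy to picture with a sketch. By Malus's law, the channel behind an on-chip polarizer at angle θ, with an external linear polarizer at angle φ in front of the lens, sees an effective exposure proportional to cos²(θ − φ). The minimal, hypothetical Python rendering below fuses the four channels by inverse-exposure weighting while masking saturated pixels; the function name, the hat weighting, and the default angles are assumptions, and the paper's actual algorithm additionally self-calibrates the exposures and rejects outliers.

```python
import numpy as np

def fuse_polarization_hdr(channels, polarizer_angle_deg=10.0,
                          angles_deg=(0, 45, 90, 135),
                          sat=0.98, eps=1e-6):
    """Fuse four polarization channels (floats in [0, 1]) into an HDR map."""
    phi = np.deg2rad(polarizer_angle_deg)
    num, den = 0.0, 0.0
    for img, theta_deg in zip(channels, angles_deg):
        theta = np.deg2rad(theta_deg)
        # Malus's law: relative exposure seen by this channel.
        exposure = np.cos(theta - phi) ** 2 + eps
        # Hat weight favors well-exposed pixels; zero out saturated ones.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        w = np.where(img >= sat, 0.0, w)
        num = num + w * img / exposure
        den = den + w
    return num / np.maximum(den, eps)
```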

Roadmap on Deep Learning for Microscopy

Mar 07, 2023
Giovanni Volpe, Carolina Wählby, Lei Tian, Michael Hecht, Artur Yakimovich, Kristina Monakhova, Laura Waller, Ivo F. Sbalzarini, Christopher A. Metzler, Mingyang Xie, Kevin Zhang, Isaac C. D. Lenton, Halina Rubinsztein-Dunlop, Daniel Brunner, Bijie Bai, Aydogan Ozcan, Daniel Midtvedt, Hao Wang, Nataša Sladoje, Joakim Lindblad, Jason T. Smith, Marien Ochoa, Margarida Barroso, Xavier Intes, Tong Qiu, Li-Yu Yu, Sixian You, Yongtao Liu, Maxim A. Ziatdinov, Sergei V. Kalinin, Arlo Sheridan, Uri Manor, Elias Nehme, Ofri Goldenberg, Yoav Shechtman, Henrik K. Moberg, Christoph Langhammer, Barbora Špačková, Saga Helgadottir, Benjamin Midtvedt, Aykut Argun, Tobias Thalheim, Frank Cichos, Stefano Bo, Lars Hubatsch, Jesus Pineda, Carlo Manzo, Harshith Bachimanchi, Erik Selander, Antoni Homs-Corbera, Martin Fränzl, Kevin de Haan, Yair Rivenson, Zofia Korczak, Caroline Beck Adiels, Mite Mijalkov, Dániel Veréb, Yu-Wei Chang, Joana B. Pereira, Damian Matuszewski, Gustaf Kylberg, Ida-Maria Sintorn, Juan C. Caicedo, Beth A Cimini, Muyinatu A. Lediju Bell, Bruno M. Saraiva, Guillaume Jacquemet, Ricardo Henriques, Wei Ouyang, Trang Le, Estibaliz Gómez-de-Mariscal, Daniel Sage, Arrate Muñoz-Barrutia, Ebba Josefson Lindqvist, Johanna Bergman

Through digital imaging, microscopy has evolved from primarily being a means for visual observation of life at the micro- and nano-scale into a quantitative tool with ever-increasing resolution and throughput. Artificial intelligence, deep neural networks, and machine learning are related terms for the computational methods that have gained a pivotal role in microscopy-based research over the past decade. This Roadmap, written collectively by prominent researchers, covers selected aspects of how machine learning is applied to microscopy image data, with the aim of gaining scientific knowledge through improved image quality; automated detection, segmentation, classification, and tracking of objects; and efficient merging of information from multiple imaging modalities. We aim to give the reader an overview of the key developments and an understanding of the possibilities and limitations of machine learning for microscopy. The Roadmap will be of interest to a wide cross-disciplinary audience in the physical and life sciences.


MetaDIP: Accelerating Deep Image Prior with Meta Learning

Sep 18, 2022
Kevin Zhang, Mingyang Xie, Maharshi Gor, Yi-Ting Chen, Yvonne Zhou, Christopher A. Metzler

Deep image prior (DIP) is a recently proposed technique for solving imaging inverse problems by fitting the reconstructed images to the output of an untrained convolutional neural network. Unlike pretrained feedforward neural networks, the same DIP can generalize to arbitrary inverse problems, from denoising to phase retrieval, while offering competitive performance at each task. The central disadvantage of DIP is that, while feedforward neural networks can reconstruct an image in a single pass, DIP must gradually update its weights over hundreds to thousands of iterations, at a significant computational cost. In this work we use meta-learning to massively accelerate DIP-based reconstructions. By learning a proper initialization for the DIP weights, we demonstrate a 10x improvement in runtimes across a range of inverse imaging tasks. Moreover, we demonstrate that a network trained to quickly reconstruct faces also generalizes to reconstructing natural image patches.
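
A hedged sketch of the idea: learn a weight initialization across many reconstruction tasks so that a fresh DIP fit converges in far fewer iterations. The PyTorch snippet below uses a Reptile-style outer loop as a simpler stand-in for the meta-learning procedure (the paper's exact algorithm may differ), and the helpers `make_net`, `forward_op`, and the `(z, y, forward_op)` task format are assumptions.

```python
import copy

import torch

def dip_inner_loop(net, z, y, forward_op, steps=20, lr=1e-3):
    """Briefly fit DIP weights so forward_op(net(z)) matches measurements y."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(forward_op(net(z)), y)
        loss.backward()
        opt.step()
    return net

def meta_initialize(make_net, tasks, meta_steps=1000, meta_lr=0.1):
    """Learn a DIP initialization across tasks with a Reptile-style loop."""
    meta_net = make_net()
    for step in range(meta_steps):
        z, y, forward_op = tasks[step % len(tasks)]
        adapted = dip_inner_loop(copy.deepcopy(meta_net), z, y, forward_op)
        with torch.no_grad():
            # Reptile outer update: nudge meta-weights toward adapted weights.
            for p_meta, p_task in zip(meta_net.parameters(),
                                      adapted.parameters()):
                p_meta += meta_lr * (p_task - p_meta)
    return meta_net
```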


TurbuGAN: An Adversarial Learning Approach to Spatially-Varying Multiframe Blind Deconvolution with Applications to Imaging Through Turbulence

Mar 13, 2022
Brandon Y. Feng, Mingyang Xie, Christopher A. Metzler

We present a self-supervised and self-calibrating multi-shot approach to imaging through atmospheric turbulence, called TurbuGAN. Our approach requires no paired training data, adapts itself to the distribution of the turbulence, leverages domain-specific data priors, outperforms existing approaches, and can generalize from tens to tens of thousands of measurements. We achieve such functionality through an adversarial sensing framework adapted from CryoGAN, which uses a discriminator network to match the distributions of captured and simulated measurements. Our framework builds on CryoGAN by (1) generalizing the forward measurement model to incorporate physically accurate and computationally efficient models for light propagation through anisoplanatic turbulence, (2) enabling adaptation to slightly misspecified forward models, and (3) leveraging domain-specific prior knowledge using pretrained generative networks, when available. We validate TurbuGAN in simulation using realistic models for atmospheric turbulence-induced distortion.
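
At its core, adversarial sensing alternates two updates: a discriminator learns to tell captured turbulent frames from simulated ones, and the latent sharp image is updated to fool it. The hypothetical PyTorch sketch below shows one such iteration; `simulate_turbulence` stands in for the differentiable anisoplanatic forward model (drawing a fresh random turbulence realization per call), and all names and shapes are assumptions rather than the paper's actual code.

```python
import torch

def adversarial_sensing_step(image_param, simulate_turbulence, disc,
                             real_frames, opt_img, opt_disc):
    """One TurbuGAN-style update matching simulated vs. captured frames."""
    bce = torch.nn.functional.binary_cross_entropy_with_logits

    # Discriminator step: captured frames are "real", simulated are "fake".
    fake = simulate_turbulence(image_param).detach()
    opt_disc.zero_grad()
    d_loss = (bce(disc(real_frames), torch.ones(real_frames.shape[0], 1)) +
              bce(disc(fake), torch.zeros(fake.shape[0], 1)))
    d_loss.backward()
    opt_disc.step()

    # Image step: update the latent sharp image so that freshly simulated
    # turbulent frames are indistinguishable from captured ones.
    fake = simulate_turbulence(image_param)
    opt_img.zero_grad()
    g_loss = bce(disc(fake), torch.ones(fake.shape[0], 1))
    g_loss.backward()
    opt_img.step()
    return d_loss.item(), g_loss.item()
```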


PROVES: Establishing Image Provenance using Semantic Signatures

Oct 21, 2021
Mingyang Xie, Manav Kulshrestha, Shaojie Wang, Jinghan Yang, Ayan Chakrabarti, Ning Zhang, Yevgeniy Vorobeychik

Modern AI tools, such as generative adversarial networks, have transformed our ability to create and modify visual data with photorealistic results. However, one deleterious side effect of these advances is the emergence of nefarious uses that manipulate the information in visual data, such as deep fakes. We propose a novel architecture for preserving the provenance of semantic information in images, making them less susceptible to deep fake attacks. Our architecture includes semantic signing and verification steps. We apply this architecture to verify two types of semantic information: individual identities (faces) and whether the photo was taken indoors or outdoors. Verification accommodates a collection of common image transformations, such as translation, scaling, cropping, and small rotations, while rejecting adversarial transformations, such as adversarially perturbed or, in the case of face verification, swapped faces. Experiments demonstrate that, for the provenance of faces in an image, our approach is robust to black-box adversarial transformations (which are rejected) as well as benign transformations (which are accepted), with few false negatives and false positives. Background verification, on the other hand, is susceptible to black-box adversarial examples, but becomes significantly more robust after adversarial training.
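
The sign-then-verify idea can be sketched as follows: sign a quantized semantic embedding rather than raw pixels, so benign transformations leave the signature verifiable while semantic edits change the embedding. This hypothetical Python sketch uses Ed25519 signatures from the `cryptography` package; the `embed` function (e.g., a face-descriptor network), the quantization, and the similarity threshold are all assumptions, not the paper's actual pipeline.

```python
import json

import numpy as np
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

# Hypothetical key pair: key = Ed25519PrivateKey.generate()
#                        pub = key.public_key()

def sign_semantics(image, embed, private_key):
    """Sign a coarsely quantized semantic embedding of the image."""
    vec = np.round(embed(image), 2)          # quantize for stability
    payload = json.dumps(vec.tolist()).encode()
    return payload, private_key.sign(payload)

def verify_semantics(image, embed, payload, signature, public_key, tol=0.9):
    """Check the signature, then check the semantics still match."""
    public_key.verify(signature, payload)    # raises InvalidSignature if forged
    signed = np.array(json.loads(payload))
    now = embed(image)
    cos = now @ signed / (np.linalg.norm(now) * np.linalg.norm(signed))
    return cos >= tol                        # benign edits pass, swaps fail
```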


CoIL: Coordinate-based Internal Learning for Imaging Inverse Problems

Feb 09, 2021
Yu Sun, Jiaming Liu, Mingyang Xie, Brendt Wohlberg, Ulugbek S. Kamilov

We propose Coordinate-based Internal Learning (CoIL) as a new deep-learning (DL) methodology for the continuous representation of measurements. Unlike traditional DL methods that learn a mapping from the measurements to the desired image, CoIL trains a multilayer perceptron (MLP) to encode the complete measurement field by mapping the coordinates of the measurements to their responses. CoIL is a self-supervised method that requires no training examples besides the measurements of the test object itself. Once the MLP is trained, CoIL generates new measurements that can be used within a majority of image reconstruction methods. We validate CoIL on sparse-view computed tomography using several widely used reconstruction methods, including purely model-based methods and those based on DL. Our results demonstrate the ability of CoIL to consistently improve the performance of all the considered methods by providing high-fidelity measurement fields.
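
The measurement-field representation is easy to picture with a sketch: an MLP with Fourier-feature encoding maps a measurement coordinate (for sparse-view CT, say, a view angle and detector position) to its response, and is fit to the test object's own measurements before being queried on a denser coordinate grid. The PyTorch sketch below is illustrative only; the layer sizes, encoding, and training loop are assumptions.

```python
import torch

class CoordinateMLP(torch.nn.Module):
    """Map measurement coordinates to responses, in the spirit of CoIL.
    Fourier features help the MLP represent high-frequency detail."""
    def __init__(self, in_dim=2, n_freqs=8, width=256):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(n_freqs) * torch.pi)
        enc_dim = in_dim * n_freqs * 2
        self.net = torch.nn.Sequential(
            torch.nn.Linear(enc_dim, width), torch.nn.ReLU(),
            torch.nn.Linear(width, width), torch.nn.ReLU(),
            torch.nn.Linear(width, 1),
        )

    def forward(self, coords):                  # coords: (N, in_dim)
        ang = coords[..., None] * self.freqs    # (N, in_dim, n_freqs)
        enc = torch.cat([ang.sin(), ang.cos()], dim=-1).flatten(1)
        return self.net(enc)

def fit(mlp, coords, values, steps=2000, lr=1e-4):
    """Self-supervised fit to the test object's own measurements."""
    opt = torch.optim.Adam(mlp.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(mlp(coords), values)
        loss.backward()
        opt.step()
    return mlp
```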


Joint Reconstruction and Calibration using Regularization by Denoising

Nov 26, 2020
Mingyang Xie, Yu Sun, Jiaming Liu, Brendt Wohlberg, Ulugbek S. Kamilov

Regularization by denoising (RED) is a broadly applicable framework for solving inverse problems by using priors specified as denoisers. While RED has been shown to provide state-of-the-art performance in a number of applications, existing RED algorithms require exact knowledge of the measurement operator characterizing the imaging system, limiting their applicability in problems where the measurement operator has parametric uncertainties. We propose a new method, called Calibrated RED (Cal-RED), that enables joint calibration of the measurement operator along with reconstruction of the unknown image. Cal-RED extends the traditional RED methodology to imaging problems that require the calibration of the measurement operator. We validate Cal-RED on the problem of image reconstruction in computed tomography (CT) under perturbed projection angles. Our results corroborate the effectiveness of Cal-RED for joint calibration and reconstruction using pre-trained deep denoisers as image priors.
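
In outline, Cal-RED-style joint calibration alternates a RED image update with a calibration update on the operator parameters. The sketch below is a hypothetical NumPy rendering under stated assumptions: `A`, `At`, and `denoise` are user-supplied callables, the calibration gradient is taken by finite differences, and all step sizes are made up; the paper's exact update rules may differ.

```python
import numpy as np

def cal_red(y, A, At, denoise, theta0, x0, steps=200,
            gamma=1e-3, tau=0.1, eta=1e-4, h=1e-3):
    """Joint reconstruction and calibration, RED-style.

    A(x, theta):  forward operator with calibration parameters theta
    At(r, theta): its adjoint
    denoise(x):   pretrained denoiser used as the RED prior
    """
    x, theta = x0.copy(), theta0.copy()
    for _ in range(steps):
        # RED image update: data-fidelity gradient plus denoiser residual.
        grad_x = At(A(x, theta) - y, theta) + tau * (x - denoise(x))
        x = x - gamma * grad_x
        # Calibration update: finite-difference gradient of the data
        # misfit with respect to each operator parameter.
        f = lambda t: 0.5 * np.sum((A(x, t) - y) ** 2)
        grad_t = np.zeros_like(theta)
        for i in range(theta.size):
            tp = theta.copy(); tp[i] += h
            tm = theta.copy(); tm[i] -= h
            grad_t[i] = (f(tp) - f(tm)) / (2 * h)
        theta = theta - eta * grad_t
    return x, theta
```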
