Laura Waller

Roadmap on Deep Learning for Microscopy

Mar 07, 2023
Giovanni Volpe, Carolina Wählby, Lei Tian, Michael Hecht, Artur Yakimovich, Kristina Monakhova, Laura Waller, Ivo F. Sbalzarini, Christopher A. Metzler, Mingyang Xie, Kevin Zhang, Isaac C. D. Lenton, Halina Rubinsztein-Dunlop, Daniel Brunner, Bijie Bai, Aydogan Ozcan, Daniel Midtvedt, Hao Wang, Nataša Sladoje, Joakim Lindblad, Jason T. Smith, Marien Ochoa, Margarida Barroso, Xavier Intes, Tong Qiu, Li-Yu Yu, Sixian You, Yongtao Liu, Maxim A. Ziatdinov, Sergei V. Kalinin, Arlo Sheridan, Uri Manor, Elias Nehme, Ofri Goldenberg, Yoav Shechtman, Henrik K. Moberg, Christoph Langhammer, Barbora Špačková, Saga Helgadottir, Benjamin Midtvedt, Aykut Argun, Tobias Thalheim, Frank Cichos, Stefano Bo, Lars Hubatsch, Jesus Pineda, Carlo Manzo, Harshith Bachimanchi, Erik Selander, Antoni Homs-Corbera, Martin Fränzl, Kevin de Haan, Yair Rivenson, Zofia Korczak, Caroline Beck Adiels, Mite Mijalkov, Dániel Veréb, Yu-Wei Chang, Joana B. Pereira, Damian Matuszewski, Gustaf Kylberg, Ida-Maria Sintorn, Juan C. Caicedo, Beth A Cimini, Muyinatu A. Lediju Bell, Bruno M. Saraiva, Guillaume Jacquemet, Ricardo Henriques, Wei Ouyang, Trang Le, Estibaliz Gómez-de-Mariscal, Daniel Sage, Arrate Muñoz-Barrutia, Ebba Josefson Lindqvist, Johanna Bergman

Through digital imaging, microscopy has evolved from primarily being a means for visual observation of life at the micro- and nano-scale, to a quantitative tool with ever-increasing resolution and throughput. Artificial intelligence, deep neural networks, and machine learning are all niche terms describing computational methods that have gained a pivotal role in microscopy-based research over the past decade. This Roadmap is written collectively by prominent researchers and encompasses selected aspects of how machine learning is applied to microscopy image data, with the aim of gaining scientific knowledge through improved image quality, automated detection, segmentation, classification and tracking of objects, and efficient merging of information from multiple imaging modalities. We aim to give the reader an overview of the key developments and an understanding of possibilities and limitations of machine learning for microscopy. It will be of interest to a wide cross-disciplinary audience in the physical sciences and life sciences.

BiPMAP: A Toolbox for Predictions of Perceived Motion Artifacts on Modern Displays

Dec 07, 2022
Guanghan Meng, Dekel Galor, Laura Waller, Martin S. Banks

Presenting dynamic scenes without incurring motion artifacts visible to observers requires sustained effort from the display industry. A tool that predicts motion artifacts and simulates artifact elimination by optimizing the display configuration is highly desirable to guide the design and manufacture of modern displays. Despite this demand, no such tool is available on the market. In this study, we deliver an interactive toolkit, the Binocular Perceived Motion Artifact Predictor (BiPMAP), as an executable file with GPU acceleration. BiPMAP accounts for an extensive collection of user-defined parameters and directly visualizes a variety of motion artifacts by presenting the perceived continuous and sampled moving stimuli side by side. For accurate artifact predictions, BiPMAP utilizes a novel model of the human contrast sensitivity function to effectively imitate the frequency modulation of the human visual system. In addition, BiPMAP can derive various in-plane motion artifacts for 2D displays and depth distortion in 3D stereoscopic displays.

* 11 pages, 9 figures 
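
As a rough illustration of the kind of prediction such a tool makes, the sketch below scores the visibility of hold-type motion blur on a sample-and-hold display. It assumes smooth-pursuit eye tracking and substitutes the classic Mannos-Sakrison CSF for BiPMAP's novel CSF model; the weighting scheme and all function names are illustrative, not taken from the paper.

```python
# Toy predictor of hold-type motion blur visibility (NOT the BiPMAP model).
# Assumptions: smooth-pursuit tracking, Mannos-Sakrison CSF as a stand-in.
import numpy as np

def hold_blur_extent_deg(speed_deg_per_s, frame_rate_hz):
    # Under smooth pursuit, a sample-and-hold frame smears across the retina
    # by (speed / frame rate) degrees of visual angle.
    return speed_deg_per_s / frame_rate_hz

def csf_mannos_sakrison(f_cpd):
    # Classic luminance contrast sensitivity vs. spatial frequency (cyc/deg).
    f = np.asarray(f_cpd, dtype=float)
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-((0.114 * f) ** 1.1))

def visible_blur_score(speed, rate, f=np.linspace(0.5, 60.0, 256)):
    # Weight the contrast lost to the blur kernel (a box of the smear width,
    # i.e. a sinc in frequency) by the CSF; higher score = more visible blur.
    width = hold_blur_extent_deg(speed, rate)
    kernel_spectrum = np.abs(np.sinc(width * f))
    return np.trapz((1.0 - kernel_spectrum) * csf_mannos_sakrison(f), f)

for rate in (60, 120, 240):  # blur visibility drops as frame rate rises
    print(f"{rate} Hz: {visible_blur_score(10.0, rate):.3f}")
```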

Linear Revolution-Invariance: Modeling and Deblurring Spatially-Varying Imaging Systems

Jun 17, 2022
Amit Kohli, Anastasios Angelopoulos, Sixian You, Kyrollos Yanny, Laura Waller

We develop theory and algorithms for modeling and deblurring imaging systems that are composed of rotationally-symmetric optics. Such systems have point spread functions (PSFs) which are spatially-varying, but only vary radially, a property we call linear revolution-invariance (LRI). From the LRI property we develop an exact theory for linear imaging with radially-varying optics, including an analog of the Fourier Convolution Theorem. This theory, in tandem with a calibration procedure using Seidel aberration coefficients, yields an efficient forward model and deblurring algorithm which requires only a single calibration image -- one that is easier to measure than a single PSF. We test these methods in simulation and experimentally on images of resolution targets, rabbit liver tissue, and live tardigrades obtained using the UCLA Miniscope v3. We find that the LRI forward model generates accurate radially-varying blur, and LRI deblurring improves resolution, especially near the edges of the field-of-view. These methods are available for use as a Python package at https://github.com/apsk14/lri.

* 17 pages, 9 figures 
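
The authors' exact LRI implementation lives in the linked package; the naive sketch below is not it. It only illustrates the kind of radially-varying blur an LRI forward model must capture, by convolving with a few ring PSFs of assumed widths and blending them by normalized radius.

```python
# Naive radially-varying blur (illustration only; the exact LRI forward
# model is at https://github.com/apsk14/lri). Blur widths are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def radial_blur(img, sigmas=(0.5, 2.0, 4.0)):
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    # Normalized distance from the optical axis (image center), 0 to 1.
    r = np.hypot(yy - h / 2, xx - w / 2) / np.hypot(h / 2, w / 2)
    blurred = np.stack([gaussian_filter(img, s) for s in sigmas])
    # Linearly interpolate between the ring PSFs according to radius.
    idx = r * (len(sigmas) - 1)
    lo = np.clip(np.floor(idx).astype(int), 0, len(sigmas) - 2)
    frac = idx - lo
    return (1 - frac) * blurred[lo, yy, xx] + frac * blurred[lo + 1, yy, xx]

img = np.zeros((128, 128)); img[::16, :] = 1.0  # line target
out = radial_blur(img)  # sharp at center, blurrier toward the edges
```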

Dynamic Structured Illumination Microscopy with a Neural Space-time Model

Jun 03, 2022
Ruiming Cao, Fanglin Linda Liu, Li-Hao Yeh, Laura Waller

Structured illumination microscopy (SIM) reconstructs a super-resolved image from multiple raw images; hence, acquisition speed is limited, making it unsuitable for dynamic scenes. We propose a new method, Speckle Flow SIM, that models sample motion during the data capture in order to reconstruct dynamic scenes with super-resolution. Speckle Flow SIM uses fixed speckle illumination and relies on sample motion to capture a sequence of raw images. Then, the spatio-temporal relationship of the dynamic scene is modeled using a neural space-time model with coordinate-based multi-layer perceptrons (MLPs), and the motion dynamics and the super-resolved scene are jointly recovered. We validated Speckle Flow SIM in simulation and built a simple, inexpensive experimental setup with off-the-shelf components. We demonstrated that Speckle Flow SIM can reconstruct a dynamic scene with deformable motion and 1.88x the diffraction-limited resolution in experiment.
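
A minimal PyTorch sketch of the neural space-time idea, under assumed layer sizes: one coordinate MLP maps (x, y, t) to a displacement field, a second maps the warped coordinates to scene intensity, and the rendered measurement is the scene sampled at the warped positions, modulated by the fixed speckle. The paper's actual architecture and loss differ.

```python
# Minimal neural space-time model sketch (layer sizes are assumptions).
import torch
import torch.nn as nn

def mlp(d_in, d_out, width=128, depth=4):
    layers, d = [], d_in
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, d_out))
    return nn.Sequential(*layers)

motion = mlp(3, 2)  # (x, y, t) -> (dx, dy) displacement field
scene = mlp(2, 1)   # (x, y) -> super-resolved scene intensity

def render(coords_xyt, speckle):
    # Warp coordinates by the estimated motion, sample the static scene,
    # and modulate by the fixed speckle illumination at the sensor grid.
    disp = motion(coords_xyt)
    warped = coords_xyt[:, :2] + disp
    return scene(warped).squeeze(-1) * speckle

# Both MLPs would be optimized jointly against the raw image sequence,
# e.g. loss = ((render(coords, speckle) - measured) ** 2).mean().
```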

Dancing under the stars: video denoising in starlight

Apr 08, 2022
Kristina Monakhova, Stephan R. Richter, Laura Waller, Vladlen Koltun

Imaging in low light is extremely challenging due to low photon counts. Using sensitive CMOS cameras, it is currently possible to take videos at night under moonlight (0.05-0.3 lux illumination). In this paper, we demonstrate photorealistic video under starlight (no moon present, $<$0.001 lux) for the first time. To enable this, we develop a GAN-tuned physics-based noise model to more accurately represent camera noise at the lowest light levels. Using this noise model, we train a video denoiser using a combination of simulated noisy video clips and real noisy still images. We capture a 5-10 fps video dataset with significant motion at approximately 0.6-0.7 millilux with no active illumination. Comparing against alternative methods, we achieve improved video quality at the lowest light levels, demonstrating photorealistic video denoising in starlight for the first time.

* CVPR 2022. Project page: https://kristinamonakhova.com/starlight_denoising/ 
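
For orientation, here is a generic physics-based camera noise model of the kind the paper builds on, with shot, read, row (banding), and quantization components; the paper's contribution is tuning such a model with a GAN, which this sketch omits. All parameter values are placeholders, not calibrated ones.

```python
# Generic physics-based camera noise model (parameters are placeholders;
# the paper GAN-tunes a richer model for the lowest light levels).
import numpy as np

rng = np.random.default_rng(0)

def add_camera_noise(clean, gain=100.0, read_std=2.0, row_std=1.0, bits=12):
    # clean: normalized [0, 1] image; gain: photons at full scale.
    photons = rng.poisson(clean * gain)                   # shot noise
    read = rng.normal(0.0, read_std, clean.shape)         # per-pixel read noise
    rows = rng.normal(0.0, row_std, (clean.shape[0], 1))  # row banding noise
    raw = photons + read + rows
    levels = 2 ** bits - 1
    raw = np.clip(np.round(raw / gain * levels), 0, levels)  # ADC quantization
    return raw / levels

noisy = add_camera_noise(np.full((64, 64), 0.01))  # a very low-light frame
```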

Sparse deep computer-generated holography for optical microscopy

Dec 12, 2021
Alex Liu, Yi Xue, Laura Waller

Computer-generated holography (CGH) has broad applications such as direct-view display, virtual and augmented reality, as well as optical microscopy. CGH usually utilizes a spatial light modulator that displays a computer-generated phase mask, modulating the phase of coherent light in order to generate customized patterns. The algorithm that computes the phase mask is the core of CGH and is usually tailored to meet different applications. CGH for optical microscopy usually requires 3D accessibility (i.e., generating overlapping patterns along the $z$-axis) and micron-scale spatial precision. Here, we propose a CGH algorithm using an unsupervised generative model designed for optical microscopy to synthesize 3D selected illumination. The algorithm, named sparse deep CGH, is able to generate sparsely distributed points in a large 3D volume with higher contrast than conventional CGH algorithms.

* 5 pages, 4 figures, to be presented at NeurIPS 2021 Deep Learning and Inverse Problems workshop 
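
To make the differentiable forward model concrete, here is a stripped-down gradient-based CGH optimizer: it fits an SLM phase mask to a single sparse target plane through angular-spectrum propagation. The paper instead trains an unsupervised generative model over a 3D volume; the propagation parameters below are arbitrary assumptions.

```python
# Stripped-down gradient-based CGH (single plane; NOT the paper's
# generative model). Wavelength, pixel pitch, and z are assumed values.
import torch

def angular_spectrum(field, z, wavelength=0.5e-6, dx=6.4e-6):
    # Free-space propagation of a complex field by distance z (meters).
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=dx)
    fxx, fyy = torch.meshgrid(fx, fx, indexing="ij")
    arg = 1 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    kz = 2 * torch.pi / wavelength * torch.sqrt(torch.clamp(arg, min=0.0))
    return torch.fft.ifft2(torch.fft.fft2(field) * torch.exp(1j * kz * z))

n = 128
target = torch.zeros(n, n); target[n // 2, n // 2] = 1.0  # one sparse point
phase = torch.zeros(n, n, requires_grad=True)
opt = torch.optim.Adam([phase], lr=0.1)
for _ in range(200):
    field = torch.exp(1j * phase)  # unit-amplitude SLM phase modulation
    intensity = angular_spectrum(field, z=0.01).abs() ** 2
    loss = ((intensity / intensity.max() - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```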

Distributed Reconstruction Algorithm for Electron Tomography with Multiple-scattering Samples

Oct 15, 2021
David Ren, Michael Whittaker, Colin Ophus, Laura Waller

Three-dimensional electron tomography is used to understand the structure and properties of samples in chemistry, materials science, geoscience, and biology. With the recent development of high-resolution detectors and algorithms that can account for multiple-scattering events, thicker samples can be examined at finer resolution, resulting in larger reconstruction volumes than previously possible. In this work, we propose a distributed computing framework that reconstructs large volumes by decomposing a projected tilt-series into smaller datasets such that sub-volumes can be simultaneously reconstructed on separate compute nodes using a cluster. We demonstrate our method by reconstructing a multiple-scattering layered clay (montmorillonite) sample at high resolution from a large field-of-view tilt-series phase contrast transmission electron microscopy dataset.
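
A schematic sketch of the decomposition idea, with `reconstruct` as a placeholder for an iterative multiple-scattering solver: crop the tilt series into lateral blocks with overlap margins and reconstruct each block on a separate worker. A real implementation must handle the overlaps carefully, since tilted projections couple neighboring blocks.

```python
# Schematic tilt-series decomposition; `reconstruct` is a placeholder
# (a real solver would model multiple scattering within each block).
import numpy as np
from multiprocessing import Pool

def reconstruct(block_sinogram):
    # Placeholder: stand-in for an iterative sub-volume reconstruction.
    return block_sinogram.mean(axis=0)

def split_blocks(tilt_series, n_blocks=4, margin=8):
    # tilt_series: (n_tilts, height, width); split along width with margins
    # so each worker sees the context that tilted rays drag across edges.
    w = tilt_series.shape[-1]
    edges = np.linspace(0, w, n_blocks + 1).astype(int)
    return [tilt_series[..., max(a - margin, 0):min(b + margin, w)]
            for a, b in zip(edges[:-1], edges[1:])]

if __name__ == "__main__":
    tilts = np.random.rand(61, 256, 256)  # synthetic tilt series
    with Pool(4) as pool:                 # one worker per sub-volume
        subvols = pool.map(reconstruct, split_blocks(tilts))
```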

Exceeding the limits of algorithmic self-calibration in super-resolution imaging

Sep 15, 2021
Eric Li, Stuart Sherwin, Gautam Gunjala, Laura Waller

Fourier ptychographic microscopy is a computational imaging technique that provides quantitative phase information and high resolution over a large field-of-view. Although the technique presents numerous advantages over conventional microscopy, model mismatch due to unknown optical aberrations can significantly limit reconstruction quality. Many attempts to address this issue rely on embedding pupil recovery into the reconstruction algorithm. In this paper, we demonstrate the limitations of a purely algorithmic approach and evaluate the merits of implementing a simple, dedicated calibration procedure. In simulations, we find that for a target sample reconstruction error, we can image without any aberration corrections up to a maximum aberration magnitude of $\lambda$/40. When we use algorithmic self-calibration, we can increase the aberration magnitude up to $\lambda$/10, and with our in situ speckle calibration technique, this working range is extended further to a maximum aberration magnitude of $\lambda$/3. Hence, one can trade off complexity for accuracy by using a separate calibration process, which is particularly useful for larger aberrations.
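
To show where pupil aberrations enter the Fourier ptychographic forward model, the toy below renders one low-resolution capture as the intensity of the object spectrum windowed by an aberrated pupil. The defocus-only Zernike phase and all parameter values are illustrative assumptions; the paper considers general aberrations at the stated magnitudes.

```python
# Toy FPM forward model: one capture = |IFFT(object spectrum * shifted,
# aberrated pupil)|^2. Defocus-only aberration is an assumed example.
import numpy as np

def lowres_capture(obj, shift, pupil_radius, aberration_waves=0.1):
    n = obj.shape[0]
    spectrum = np.fft.fftshift(np.fft.fft2(obj))
    yy, xx = np.mgrid[:n, :n] - n // 2
    # Normalized radius within the pupil, centered at the LED-induced shift.
    rho = np.hypot(yy - shift[0], xx - shift[1]) / pupil_radius
    defocus = 2 * rho ** 2 - 1  # Zernike defocus term, Z(2, 0)
    pupil = (rho <= 1) * np.exp(2j * np.pi * aberration_waves * defocus)
    return np.abs(np.fft.ifft2(np.fft.ifftshift(spectrum * pupil))) ** 2

# Complex object (random amplitude and phase); an off-axis LED shifts the
# pupil window within the object spectrum.
obj = np.random.rand(256, 256) * np.exp(2j * np.pi * np.random.rand(256, 256))
img = lowres_capture(obj, shift=(0, 30), pupil_radius=40)
```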
