Through digital imaging, microscopy has evolved from primarily being a means for visual observation of life at the micro- and nano-scale, to a quantitative tool with ever-increasing resolution and throughput. Artificial intelligence, deep neural networks, and machine learning are related terms describing computational methods that have gained a pivotal role in microscopy-based research over the past decade. This Roadmap is written collectively by prominent researchers and encompasses selected aspects of how machine learning is applied to microscopy image data, with the aim of gaining scientific knowledge through improved image quality, automated detection, segmentation, classification and tracking of objects, and efficient merging of information from multiple imaging modalities. We aim to give the reader an overview of the key developments and an understanding of the possibilities and limitations of machine learning for microscopy. The Roadmap will be of interest to a wide cross-disciplinary audience in the physical sciences and life sciences.
Presenting dynamic scenes without incurring motion artifacts visible to observers requires sustained effort from the display industry. A tool that predicts motion artifacts and simulates artifact elimination through optimizing the display configuration is highly desirable to guide the design and manufacture of modern displays. Despite this demand, no such tool is available on the market. In this study, we deliver an interactive toolkit, the Binocular Perceived Motion Artifact Predictor (BiPMAP), as an executable file with GPU acceleration. BiPMAP accounts for an extensive collection of user-defined parameters and directly visualizes a variety of motion artifacts by presenting the perceived continuous and sampled moving stimuli side by side. For accurate artifact prediction, BiPMAP utilizes a novel model of the human contrast sensitivity function to effectively imitate the frequency modulation of the human visual system. In addition, BiPMAP is capable of deriving various in-plane motion artifacts for 2D displays and depth distortion in 3D stereoscopic displays.
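As a rough illustration of how a contrast sensitivity function (CSF) acts as a frequency-domain filter on a stimulus, the sketch below applies the classic Mannos-Sakrison spatial CSF to an image spectrum. The curve, its constants, and the grating example are generic placeholders; they are not BiPMAP's novel CSF model or its spatio-temporal processing.

```python
import numpy as np

def mannos_sakrison_csf(f_cpd):
    """Classic Mannos-Sakrison spatial CSF; f_cpd is spatial frequency in
    cycles/degree. Used here only as a stand-in for BiPMAP's CSF model."""
    return 2.6 * (0.0192 + 0.114 * f_cpd) * np.exp(-(0.114 * f_cpd) ** 1.1)

def apply_csf(stimulus, pixels_per_degree):
    """Weight the stimulus spectrum by the CSF and return the filtered image."""
    ny, nx = stimulus.shape
    fy = np.fft.fftfreq(ny, d=1.0 / pixels_per_degree)  # cycles/degree
    fx = np.fft.fftfreq(nx, d=1.0 / pixels_per_degree)
    f_radial = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    filtered = np.fft.ifft2(np.fft.fft2(stimulus) * mannos_sakrison_csf(f_radial))
    return np.real(filtered)

# Example: filter an 8 cycles/degree grating spanning 4 degrees of visual angle
x = np.linspace(0, 4, 256, endpoint=False)           # 64 pixels per degree
grating = np.sin(2 * np.pi * 8 * x)[None, :] * np.ones((256, 1))
perceived = apply_csf(grating, pixels_per_degree=64)
```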
We develop theory and algorithms for modeling and deblurring imaging systems composed of rotationally symmetric optics. Such systems have point spread functions (PSFs) that are spatially varying but vary only radially, a property we call linear revolution-invariance (LRI). From the LRI property we develop an exact theory for linear imaging with radially varying optics, including an analog of the Fourier convolution theorem. This theory, in tandem with a calibration procedure using Seidel aberration coefficients, yields an efficient forward model and deblurring algorithm that require only a single calibration image -- one that is easier to measure than a single PSF. We test these methods in simulation and experimentally on images of resolution targets, rabbit liver tissue, and live tardigrades obtained using the UCLA Miniscope v3. We find that the LRI forward model generates accurate radially varying blur, and that LRI deblurring improves resolution, especially near the edges of the field-of-view. These methods are available as a Python package at https://github.com/apsk14/lri.
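To make the notion of a radially varying PSF concrete, here is a brute-force forward-model sketch in which each source point is blurred by a kernel that depends only on its radius. The Gaussian-that-widens-with-radius PSF family and the explicit double loop are hypothetical illustrations; the LRI theory and the lri package compute this far more efficiently and are not reproduced here.

```python
import numpy as np

def gaussian_psf(r, size=15):
    """Hypothetical radial PSF family: a Gaussian whose width grows with the
    normalized radius r in [0, 1]."""
    sigma = 0.8 + 2.0 * r
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[None, :] ** 2 + ax[:, None] ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def radially_varying_blur(obj, psf_of_radius, psf_size=15):
    """Brute-force forward model where each source point is blurred by a PSF
    that depends only on its radius. The O(N^2 k^2) loop is for illustration;
    the LRI theory replaces it with an efficient transform-domain model."""
    ny, nx = obj.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    cy, cx = (ny - 1) / 2, (nx - 1) / 2
    r = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2) / np.hypot(cy, cx)
    half = psf_size // 2
    out = np.zeros((ny + 2 * half, nx + 2 * half))
    for y in range(ny):
        for x in range(nx):
            if obj[y, x] != 0:
                out[y:y + psf_size, x:x + psf_size] += obj[y, x] * psf_of_radius(r[y, x])
    return out[half:half + ny, half:half + nx]

# Usage: the point near the edge gets a visibly wider blur than the central one
obj = np.zeros((128, 128))
obj[64, 64] = obj[10, 120] = 1.0
blurred = radially_varying_blur(obj, gaussian_psf)
```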
Structured illumination microscopy (SIM) reconstructs a super-resolved image from multiple raw images; acquisition speed is therefore limited, making it unsuitable for dynamic scenes. We propose a new method, Speckle Flow SIM, that models sample motion during data capture in order to reconstruct dynamic scenes with super-resolution. Speckle Flow SIM uses fixed speckle illumination and relies on sample motion to capture a sequence of raw images. The spatio-temporal relationship of the dynamic scene is then modeled using a neural space-time model with coordinate-based multi-layer perceptrons (MLPs), and the motion dynamics and the super-resolved scene are jointly recovered. We validated Speckle Flow SIM in simulation and built a simple, inexpensive experimental setup with off-the-shelf components. We demonstrated experimentally that Speckle Flow SIM can reconstruct a dynamic scene with deformable motion at 1.88x the diffraction-limited resolution.
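A minimal sketch of a coordinate-based neural space-time model is shown below, assuming one MLP for a deformable motion field and one for a static canonical scene. The architecture, widths, and the render function are illustrative guesses, not the paper's exact parameterization or training loss.

```python
import torch
import torch.nn as nn

class CoordMLP(nn.Module):
    """Small coordinate-based MLP; width and depth are illustrative choices."""
    def __init__(self, in_dim, out_dim, hidden=128, depth=4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(d, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)

# Hypothetical decomposition: a motion MLP warps (x, y, t) back to a canonical
# frame, where a scene MLP evaluates the static super-resolved scene.
motion = CoordMLP(in_dim=3, out_dim=2)   # (x, y, t) -> displacement (dx, dy)
scene = CoordMLP(in_dim=2, out_dim=1)    # (x', y') -> intensity

def render(xyt):
    """xyt: (N, 3) tensor of normalized (x, y, t) coordinates."""
    warped = xyt[:, :2] + motion(xyt)    # deformable motion field
    return scene(warped)                 # canonical-scene intensity

# During fitting, the rendered scene would be multiplied by the known speckle
# pattern and compared with the raw images; that measurement loss is omitted.
intensities = render(torch.rand(1024, 3))
```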
Imaging in low light is extremely challenging due to low photon counts. Using sensitive CMOS cameras, it is currently possible to take videos at night under moonlight (0.05-0.3 lux illumination). In this paper, we demonstrate photorealistic video under starlight (no moon present, $<$0.001 lux) for the first time. To enable this, we develop a GAN-tuned physics-based noise model to more accurately represent camera noise at the lowest light levels. Using this noise model, we train a video denoiser using a combination of simulated noisy video clips and real noisy still images. We capture a 5-10 fps video dataset with significant motion at approximately 0.6-0.7 millilux with no active illumination. Comparing against alternative methods, we achieve improved video quality at the lowest light levels, demonstrating photorealistic video denoising in starlight for the first time.
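The sketch below shows a generic physics-based low-light noise synthesizer of the kind used to create training pairs from clean frames, with shot, read, row-banding, and quantization components. All parameter values and the specific component set are assumptions for illustration; this is not the GAN-tuned noise model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def synth_low_light_noise(clean, photons_per_unit=5.0, read_noise_std=2.0,
                          row_noise_std=0.5, adc_levels=1024):
    """Generic physics-based low-light noise synthesis for training data.
    Shot, read, row-banding, and quantization terms follow standard CMOS
    noise modeling; all parameters are placeholders, not the GAN-tuned model."""
    clean = np.clip(clean, 0.0, None)
    # Shot noise: Poisson statistics on the expected photoelectron count
    electrons = rng.poisson(clean * photons_per_unit).astype(np.float64)
    # Read noise: zero-mean Gaussian, independent per pixel
    electrons += rng.normal(0.0, read_noise_std, size=clean.shape)
    # Row (banding) noise: one Gaussian offset shared across each row
    electrons += rng.normal(0.0, row_noise_std, size=(clean.shape[0], 1))
    # Quantization to a finite ADC bit depth
    signal = electrons / photons_per_unit
    return np.round(signal * adc_levels) / adc_levels

# Very dim flat frame, roughly millilux-regime signal levels
noisy = synth_low_light_noise(np.full((480, 640), 0.02))
```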
Computer-generated holography (CGH) has broad applications such as direct-view display, virtual and augmented reality, and optical microscopy. CGH usually utilizes a spatial light modulator that displays a computer-generated phase mask, modulating the phase of coherent light in order to generate customized patterns. The algorithm that computes the phase mask is the core of CGH and is usually tailored to the application. CGH for optical microscopy usually requires 3D accessibility (i.e., generating overlapping patterns along the $z$-axis) and micron-scale spatial precision. Here, we propose a CGH algorithm using an unsupervised generative model designed for optical microscopy to synthesize 3D selective illumination. The algorithm, named sparse deep CGH, is able to generate sparsely distributed points in a large 3D volume with higher contrast than conventional CGH algorithms.
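For context, the conventional baseline alluded to above can be as simple as a Gerchberg-Saxton loop that alternates between the SLM plane and the target plane; a single-plane sketch follows. This is the standard iterative algorithm, not the sparse deep CGH generative model, and the grid size and iteration count are arbitrary.

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iters=100, seed=0):
    """Single-plane Gerchberg-Saxton phase retrieval: the SLM sits in the
    Fourier (pupil) plane of the target. A conventional CGH baseline, not
    sparse deep CGH."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2 * np.pi, target_amp.shape)
    for _ in range(n_iters):
        slm_field = np.exp(1j * phase)                      # unit-amplitude SLM plane
        far_field = np.fft.fftshift(np.fft.fft2(slm_field))
        # Keep the propagated phase, impose the desired target amplitude
        constrained = target_amp * np.exp(1j * np.angle(far_field))
        back = np.fft.ifft2(np.fft.ifftshift(constrained))
        phase = np.angle(back)                              # updated phase mask
    return phase

# Example target: a few bright points in an otherwise dark plane
target = np.zeros((256, 256))
target[100, 100] = target[150, 200] = 1.0
phase_mask = gerchberg_saxton(target)
```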
Three-dimensional electron tomography is used to understand the structure and properties of samples in chemistry, materials science, geoscience, and biology. With the recent development of high-resolution detectors and algorithms that can account for multiple-scattering events, thicker samples can be examined at finer resolution, resulting in larger reconstruction volumes than previously possible. In this work, we propose a distributed computing framework that reconstructs large volumes by decomposing a projected tilt-series into smaller datasets such that sub-volumes can be simultaneously reconstructed on separate compute nodes of a cluster. We demonstrate our method by reconstructing a multiple-scattering layered clay (montmorillonite) sample at high resolution from a large field-of-view phase contrast transmission electron microscopy tilt-series dataset.
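A toy version of the decomposition idea: in a single-axis, parallel-beam geometry, rows of the tilt-series along the tilt axis back-project into independent slabs, so they can be reconstructed in parallel and concatenated. The per-chunk solver below is a placeholder unfiltered back-projection and the process pool stands in for cluster nodes; neither reflects the multiple-scattering reconstruction or scheduling used in the paper.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def reconstruct_subvolume(tilt_chunk, angles_deg):
    """Placeholder per-chunk solver: unfiltered back-projection of every slice
    perpendicular to the tilt axis. A real pipeline would call a proper
    tomography solver (e.g. one handling multiple scattering) here."""
    n_tilts, n_rows, n_cols = tilt_chunk.shape
    vol = np.zeros((n_rows, n_cols, n_cols))
    xx, zz = np.meshgrid(np.arange(n_cols) - n_cols / 2,
                         np.arange(n_cols) - n_cols / 2)
    for a, theta in zip(range(n_tilts), np.deg2rad(angles_deg)):
        # Detector coordinate hit by each voxel at this tilt angle
        u = (xx * np.cos(theta) + zz * np.sin(theta) + n_cols / 2).astype(int)
        valid = (u >= 0) & (u < n_cols)
        for row in range(n_rows):
            vol[row][valid] += tilt_chunk[a, row][u[valid]]
    return vol / n_tilts

def distributed_reconstruction(tilt_series, angles_deg, n_chunks=4):
    """Split the tilt-series along the tilt-axis (row) direction into chunks
    that reconstruct independent sub-volumes; the process pool stands in for
    separate compute nodes."""
    chunks = np.array_split(tilt_series, n_chunks, axis=1)
    with ProcessPoolExecutor() as pool:
        subvols = list(pool.map(reconstruct_subvolume, chunks,
                                [angles_deg] * n_chunks))
    return np.concatenate(subvols, axis=0)

if __name__ == "__main__":
    sim = np.random.rand(61, 64, 64)              # 61 tilts on a 64x64 detector
    angles = np.linspace(-60, 60, 61)
    print(distributed_reconstruction(sim, angles).shape)   # (64, 64, 64)
```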
Fourier ptychographic microscopy is a computational imaging technique that provides quantitative phase information and high resolution over a large field-of-view. Although the technique presents numerous advantages over conventional microscopy, model mismatch due to unknown optical aberrations can significantly limit reconstruction quality. Many attempts to address this issue rely on embedding pupil recovery into the reconstruction algorithm. In this paper, we demonstrate the limitations of a purely algorithmic approach and evaluate the merits of implementing a simple, dedicated calibration procedure. In simulations, we find that for a target sample reconstruction error, we can image without any aberration corrections up to a maximum aberration magnitude of $\lambda$/40. When we use algorithmic self-calibration, we can increase the aberration magnitude up to $\lambda$/10, and with our in situ speckle calibration technique, this working range is extended further to a maximum aberration magnitude of $\lambda$/3. Hence, one can trade off complexity for accuracy by using a separate calibration process, which is particularly useful for larger aberrations.
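To make the quoted aberration magnitudes concrete, the sketch below builds a circular pupil whose phase is a simple astigmatism term scaled to a peak-to-valley wavefront error of $\lambda$/40, $\lambda$/10, or $\lambda$/3. The aberration shape, the peak-to-valley convention, and all sizes are illustrative assumptions, not the aberrations or calibration procedures studied in the paper.

```python
import numpy as np

def aberrated_pupil(n=256, na_radius=0.4, wavefront_error_waves=1 / 10):
    """Circular pupil with an astigmatism-like phase scaled so its peak-to-
    valley wavefront error equals the given fraction of a wavelength. The
    aberration shape and the peak-to-valley convention are illustrative."""
    f = np.linspace(-0.5, 0.5, n)
    fxx, fyy = np.meshgrid(f, f)
    rho = np.sqrt(fxx ** 2 + fyy ** 2) / na_radius           # normalized pupil radius
    theta = np.arctan2(fyy, fxx)
    support = (rho <= 1.0).astype(float)
    aberration = (rho ** 2) * np.cos(2 * theta)               # Zernike-style astigmatism
    aberration /= np.ptp(aberration[support > 0]) + 1e-12
    phase = 2 * np.pi * wavefront_error_waves * aberration    # waves -> radians
    return support * np.exp(1j * phase)

# Pupils at the three working ranges quoted above
pupils = {label: aberrated_pupil(wavefront_error_waves=w)
          for label, w in [("no correction", 1 / 40),
                           ("algorithmic self-calibration", 1 / 10),
                           ("speckle calibration", 1 / 3)]}
```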