Abstract: Four-dimensional scanning transmission electron microscopy (4D-STEM) enables mapping of diffraction information with nanometer-scale spatial resolution, offering detailed insight into local structure, orientation, and strain. However, as data dimensionality and sampling density increase, particularly for in situ scanning diffraction experiments (5D-STEM), robust segmentation of spatially coherent regions becomes essential for efficient and physically meaningful analysis. Here, we introduce a clustering framework that identifies crystallographically distinct domains in 4D-STEM datasets. Using local diffraction-pattern similarity as a metric, the method extracts closed contours delineating regions of coherent structural behavior. This approach produces cluster-averaged diffraction patterns that improve the signal-to-noise ratio and reduce data volume by orders of magnitude, enabling rapid and accurate orientation, phase, and strain mapping. We demonstrate the applicability of this approach to in situ liquid-cell 4D-STEM data of gold nanoparticle growth. Our method provides a scalable and generalizable route to spatially coherent segmentation, data compression, and quantitative structure-strain mapping across diverse 4D-STEM modalities. The full analysis code and example workflows are publicly available to support reproducibility and reuse.
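As a hedged illustration of the segmentation idea summarized above, the following Python sketch thresholds the cosine similarity between neighboring diffraction patterns and labels the resulting connected regions, then averages the patterns within each region. The array names, the choice of cosine similarity, and the threshold value are assumptions for illustration, not the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import label

# Illustrative segmentation of a 4D-STEM scan into spatially coherent clusters.
# `data` has shape (scan_y, scan_x, k_y, k_x); all names are assumptions.

def similarity_map(data):
    """Cosine similarity of each pattern with its right and down neighbors."""
    flat = data.reshape(*data.shape[:2], -1).astype(np.float64)
    norm = flat / (np.linalg.norm(flat, axis=-1, keepdims=True) + 1e-12)
    sim_x = np.einsum('ijk,ijk->ij', norm[:, :-1], norm[:, 1:])  # horizontal pairs
    sim_y = np.einsum('ijk,ijk->ij', norm[:-1, :], norm[1:, :])  # vertical pairs
    sim = np.ones(data.shape[:2])
    # assign each scan position the minimum similarity to its neighbors
    sim[:, :-1] = np.minimum(sim[:, :-1], sim_x)
    sim[:-1, :] = np.minimum(sim[:-1, :], sim_y)
    return sim

def segment(data, threshold=0.95):
    """Label connected regions whose neighbor similarity exceeds threshold,
    and return one cluster-averaged diffraction pattern per region."""
    labels, n = label(similarity_map(data) > threshold)
    means = np.stack([data[labels == i].mean(axis=0) for i in range(1, n + 1)])
    return labels, means
```

Because each labeled region is replaced by a single averaged pattern, the data volume drops roughly by the mean cluster size, which is the compression-plus-denoising effect the abstract describes.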
Abstract: Electron ptychography enables dose-efficient atomic-resolution imaging, but conventional reconstruction algorithms suffer from noise sensitivity, slow convergence, and extensive manual hyperparameter tuning of regularization, especially in three-dimensional multislice reconstructions. We introduce a deep generative prior (DGP) framework for electron ptychography that uses the implicit regularization of convolutional neural networks to address these challenges. Two DGPs parameterize the complex-valued sample and probe within an automatic-differentiation mixed-state multislice forward model. Compared to pixel-based reconstructions, DGPs offer four key advantages: (i) greater noise robustness and improved information limits at low dose; (ii) markedly faster convergence, especially at low spatial frequencies; (iii) improved depth regularization; and (iv) minimal user-specified regularization. The DGP framework promotes spatial coherence and suppresses high-frequency noise without extensive tuning, and a pre-training strategy stabilizes reconstructions. Our results establish DGP-enabled ptychography as a robust approach that reduces expertise barriers and computational cost, delivering high-resolution imaging across diverse materials and biological systems.
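The following PyTorch sketch illustrates the general DGP idea under stated assumptions: the complex object is reparameterized as the output of a small CNN applied to a fixed latent, and the CNN weights are optimized through a differentiable multislice forward model. Every name, shape, and hyperparameter here is an assumption; a single probe mode and a single probe position stand in for the paper's mixed-state model over a full scan, and the probe DGP is omitted for brevity.

```python
import torch
import torch.nn as nn

class DGP(nn.Module):
    """CNN mapping a fixed latent tensor to complex-valued object slices."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 2, 3, padding=1),   # 2 channels: real and imaginary
        )

    def forward(self, z):
        out = self.net(z)
        return torch.complex(out[:, 0], out[:, 1])  # (n_slices, H, W)

def multislice(probe, slices, propagator):
    """Transmit through each slice, propagate in k-space, return far-field intensity."""
    wave = probe
    for t in slices:
        wave = torch.fft.ifft2(torch.fft.fft2(wave * t) * propagator)
    return torch.abs(torch.fft.fft2(wave)) ** 2

# Toy usage with random stand-ins for the probe, Fresnel kernel, and measurement.
H = W = 32
n_slices = 4
probe = torch.randn(H, W, dtype=torch.complex64)
propagator = torch.ones(H, W, dtype=torch.complex64)  # placeholder Fresnel kernel
measured = torch.rand(H, W)

z = torch.randn(n_slices, 16, H, W)          # fixed latent input to the DGP
dgp = DGP()
opt = torch.optim.Adam(dgp.parameters(), lr=1e-3)
for _ in range(50):
    pred = multislice(probe, dgp(z), propagator)
    # amplitude loss against the measured diffraction intensities
    loss = torch.mean((pred.sqrt() - measured.sqrt()) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Note that the optimizer updates the CNN weights rather than object pixels directly; this convolutional reparameterization is the source of the implicit regularization the abstract attributes to DGPs.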