Arpan Biswas

Human-in-the-loop: The future of Machine Learning in Automated Electron Microscopy

Oct 08, 2023
Sergei V. Kalinin, Yongtao Liu, Arpan Biswas, Gerd Duscher, Utkarsh Pratiush, Kevin Roccapriore, Maxim Ziatdinov, Rama Vasudevan

Machine learning methods are progressively gaining acceptance in the electron microscopy community for de-noising, semantic segmentation, and dimensionality reduction of data post-acquisition. The introduction of APIs by major instrument manufacturers now allows ML workflows to be deployed on microscopes, not only for data analytics but also for real-time decision-making and feedback in microscope operation. However, the number of use cases for real-time ML remains remarkably small. Here, we discuss some considerations in designing ML-based active experiments and posit that the likely strategy for the next several years will be human-in-the-loop automated experiments (hAE). In this paradigm, the ML agent directly controls the beam position and the image and spectroscopy acquisition functions, while the human operator monitors experiment progression in the real and feature spaces of the system and tunes the policies of the ML agent to steer the experiment toward specific objectives.
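
To make the hAE paradigm more concrete, here is a minimal, purely illustrative sketch of such a control loop. The candidate grid, the scoring functions, and all names (`score_candidates`, `policy_weights`, `run_hae_loop`) are hypothetical stand-ins; this is not the authors' implementation or any instrument vendor's API.

```python
import numpy as np

# Hypothetical sketch of a human-in-the-loop automated experiment (hAE) loop.
# The candidate grid, scores, and acquisition step are stand-ins, not a real
# microscope API.

def score_candidates(candidates, policy_weights):
    """Combine hypothetical ML-derived scores into a single acquisition score."""
    novelty = np.random.rand(len(candidates))            # e.g. uncertainty from a surrogate model
    physics_interest = np.random.rand(len(candidates))   # e.g. output of a structure classifier
    return (policy_weights["novelty"] * novelty
            + policy_weights["physics"] * physics_interest)

def run_hae_loop(n_steps=3):
    policy_weights = {"novelty": 0.5, "physics": 0.5}        # tunable by the operator
    grid = [(x, y) for x in range(64) for y in range(64)]    # candidate beam positions
    for step in range(n_steps):
        scores = score_candidates(grid, policy_weights)
        target = grid[int(np.argmax(scores))]
        # A real implementation would trigger image/spectrum acquisition here
        # through the vendor API and update the ML agent with the new data.
        print(f"step {step}: acquiring at beam position {target}")
        # Human in the loop: the operator watches real- and feature-space views
        # and may re-weight the policy between steps to steer the experiment,
        # e.g. policy_weights["physics"] = 0.8
    return policy_weights

if __name__ == "__main__":
    run_hae_loop()
```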

A dynamic Bayesian optimized active recommender system for curiosity-driven Human-in-the-loop automated experiments

Apr 05, 2023
Arpan Biswas, Yongtao Liu, Nicole Creange, Yu-Chen Liu, Stephen Jesse, Jan-Chi Yang, Sergei V. Kalinin, Maxim A. Ziatdinov, Rama K. Vasudevan

Optimization of experimental materials synthesis and characterization through active learning methods has grown over the last decade, with examples ranging from diffraction measurements on combinatorial alloys at synchrotrons to searches through chemical space with automated synthesis robots for perovskites. In virtually all cases, the target property of interest for optimization is defined a priori, with limited human feedback during operation. In contrast, here we present a new type of human-in-the-loop experimental workflow, a Bayesian optimized active recommender system (BOARS), that shapes targets on the fly using human feedback. We showcase this framework applied to pre-acquired piezoresponse force spectroscopy of a ferroelectric thin film, and then implement it in real time on an atomic force microscope, where the optimization proceeds to find symmetric piezoresponse amplitude hysteresis loops. Such features are found to be more affected by subsurface defects than by the local domain structure. This work shows the utility of human-augmented machine learning approaches for curiosity-driven exploration of systems across experimental domains. The analysis reported here is summarized in a Colab notebook as a tutorial and for application to other data: https://github.com/arpanbiswas52/varTBO

* 7 figures in main text, 3 figures in Supp Material 
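
For a feel for this kind of workflow, the following is a minimal sketch, not the code in the repository above, of a Gaussian-process Bayesian optimization loop in which the scalar target derived from each raw measurement can be reshaped mid-run by human feedback. The functions `measure` and `human_target` and the 1D search space are hypothetical stand-ins for the piezoresponse measurements used in the paper.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Hypothetical sketch of a human-in-the-loop BO loop in the spirit of BOARS:
# raw measurements are stored, and the scalar target computed from them can be
# reshaped on the fly when the human changes their preference.

rng = np.random.default_rng(0)

def measure(x):
    """Stand-in for an expensive measurement (e.g. a hysteresis loop at location x)."""
    return np.sin(3.0 * x) + 0.1 * rng.standard_normal()

def human_target(raw_value, preference):
    """Scalar target shaped by human feedback; 'preference' can change mid-run."""
    return -abs(raw_value - preference)        # reward measurements near the preferred value

def expected_improvement(mu, sigma, best):
    z = (mu - best) / np.maximum(sigma, 1e-9)
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

X_grid = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
X, raw = [], []
preference = 0.8                               # initial human preference

for step in range(15):
    if step == 7:
        preference = -0.5                      # the human redefines the target mid-experiment
    targets = [human_target(r, preference) for r in raw]   # re-score stored raw data
    if len(X) < 3:
        x_next = float(rng.uniform(0.0, 1.0))  # initial random sampling
    else:
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
        gp.fit(np.array(X), np.array(targets))
        mu, sigma = gp.predict(X_grid, return_std=True)
        ei = expected_improvement(mu, sigma, max(targets))
        x_next = float(X_grid[np.argmax(ei)])
    X.append([x_next])
    raw.append(measure(x_next))

targets = [human_target(r, preference) for r in raw]
print("best location found:", X[int(np.argmax(targets))][0])
```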

Combining Variational Autoencoders and Physical Bias for Improved Microscopy Data Analysis

Feb 08, 2023
Arpan Biswas, Maxim Ziatdinov, Sergei V. Kalinin

Electron and scanning probe microscopy produce vast amounts of data in the form of images or hyperspectral data, such as EELS or 4D STEM, that contain information on a wide range of structural, physical, and chemical properties of materials. To extract valuable insights from these data, it is crucial to identify physically separate regions, such as phases, ferroic variants, and the boundaries between them. To derive an easily interpretable feature analysis with well-defined boundaries in a principled and unsupervised manner, we present a physics-augmented machine learning method that combines the capability of variational autoencoders to disentangle factors of variability within the data with a physics-driven loss function that seeks to minimize the total length of the discontinuities in images corresponding to latent representations. Our method is applied to various materials, including NiO-LSMO, BiFeO3, and graphene. The results demonstrate the effectiveness of our approach in extracting meaningful information from large volumes of imaging data. The full notebook containing the implementation of the code and the analysis workflow is available at https://github.com/arpanbiswas52/PaperNotebooks

* 20 pages, 7 figures in main text, 4 figures in Supp Mat 
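
As an illustration of the physics-driven term described above, the sketch below computes a total-variation-style penalty on the spatial map of a latent variable, a differentiable proxy for the total length of discontinuities. This is a hypothetical sketch of the idea, not the code in the repository above; how it would enter the VAE loss is indicated only in a comment with placeholder names.

```python
import torch

# Hypothetical sketch of the physics-driven regularizer: a total-variation-style
# penalty on the spatial map of a latent variable, used as a differentiable proxy
# for the total length of discontinuities.

def discontinuity_length(latent_map: torch.Tensor) -> torch.Tensor:
    """latent_map: (H, W) values of one latent variable over the scan grid."""
    dh = (latent_map[1:, :] - latent_map[:-1, :]).abs().sum()   # vertical jumps
    dw = (latent_map[:, 1:] - latent_map[:, :-1]).abs().sum()   # horizontal jumps
    return dh + dw

# In a VAE training loop this term would be added to the usual ELBO components,
# e.g. (names below are placeholders, not the authors' code):
#   loss = reconstruction_loss + beta * kl_divergence \
#          + gamma * sum(discontinuity_length(z_map) for z_map in latent_maps)

if __name__ == "__main__":
    z = torch.rand(64, 64, requires_grad=True)   # toy latent map on a 64 x 64 scan grid
    penalty = discontinuity_length(z)
    penalty.backward()                           # differentiable, hence usable during training
    print(float(penalty))
```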

Optimizing Training Trajectories in Variational Autoencoders via Latent Bayesian Optimization Approach

Jun 30, 2022
Arpan Biswas, Rama Vasudevan, Maxim Ziatdinov, Sergei V. Kalinin

Unsupervised and semi-supervised ML methods such as variational autoencoders (VAE) have become widely adopted across multiple areas of physics, chemistry, and materials science, owing to their ability to disentangle representations and to find latent manifolds for classification and regression of complex experimental data. Like other ML models, VAEs require hyperparameter tuning, e.g., balancing the Kullback-Leibler (KL) and reconstruction terms. However, the training process and the resulting manifold topology and connectivity depend not only on the hyperparameters, but also on their evolution during training. Because exhaustive search in a high-dimensional hyperparameter space is inefficient for expensive-to-train models, here we explore a latent Bayesian optimization (zBO) approach for hyperparameter trajectory optimization in unsupervised and semi-supervised ML, and demonstrate it for a joint-VAE with rotational invariances. We demonstrate an application of this method to finding joint discrete and continuous rotationally invariant representations for MNIST and for experimental data from a plasmonic nanoparticle material system. The performance of the proposed approach is discussed extensively; it allows for high-dimensional hyperparameter tuning or trajectory optimization of other ML models.

* 32 pages, including 11 figures in the main text and Appendixes with 2 figures. arXiv admin note: text overlap with arXiv:2108.12889 
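
As a rough illustration of trajectory optimization, the sketch below parameterizes the KL-weight schedule beta(t) with a few scalars and hands the resulting black-box objective to an off-the-shelf Bayesian optimizer (scikit-optimize, chosen here only for brevity). The schedule form, the stand-in training function, and all names are hypothetical and are not the zBO implementation from the paper.

```python
import numpy as np
from skopt import gp_minimize   # any off-the-shelf BO engine could be substituted here

# Hypothetical sketch: optimize the *trajectory* of the KL weight beta(t) during
# VAE training, rather than a single static value, by parameterizing the schedule
# and treating the final validation metric as a black-box objective.

def beta_schedule(t, beta0, beta1, tau):
    """Toy parameterization: exponential ramp from beta0 to beta1 with time constant tau."""
    return beta1 + (beta0 - beta1) * np.exp(-t / tau)

def train_vae_with_schedule(beta0, beta1, tau, n_epochs=50):
    """Stand-in for training a (joint-)VAE under beta(t) and returning a quality score."""
    betas = beta_schedule(np.arange(n_epochs), beta0, beta1, tau)
    # A real implementation would train the model here; we fake a smooth response
    # surface so the sketch runs without any training.
    return float(np.mean((betas - 0.3) ** 2))    # pretend lower is better

def objective(params):
    beta0, beta1, log_tau = params
    return train_vae_with_schedule(beta0, beta1, 10.0 ** log_tau)

result = gp_minimize(objective,
                     dimensions=[(0.0, 2.0),   # beta0
                                 (0.0, 2.0),   # beta1
                                 (0.0, 2.0)],  # log10(tau)
                     n_calls=20, random_state=0)
print("best schedule parameters (beta0, beta1, log10 tau):", result.x)
```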

A Nested Weighted Tchebycheff Multi-Objective Bayesian Optimization Approach for Flexibility of Unknown Utopia Estimation in Expensive Black-box Design Problems

Oct 16, 2021
Arpan Biswas, Claudio Fuentes, Christopher Hoyle

We propose a nested weighted Tchebycheff multi-objective Bayesian optimization (MOBO) framework in which a regression model selection procedure is built from an ensemble of models, toward better estimation of the uncertain parameters of the weighted-Tchebycheff expensive black-box multi-objective function. In existing work, a weighted Tchebycheff MOBO approach has been demonstrated that attempts to estimate the unknown utopia point when formulating the acquisition function, through calibration using an a priori selected regression model. However, the existing MOBO model lacks flexibility in selecting the appropriate regression model given the guided sampled data, and can therefore under-fit or over-fit as the MOBO iterations progress, reducing overall MOBO performance. Since it is in general too complex to guarantee a best model a priori, this motivates us to consider a portfolio of different families of predictive models fitted to the current training data, guided by the weighted Tchebycheff MOBO; the best model is selected following a user-defined prediction root-mean-square-error-based approach. The proposed approach is applied to optimizing a multi-modal benchmark problem and a thin-tube design under constant temperature-pressure loading, minimizing the risk of creep-fatigue failure and design cost. Finally, the nested weighted Tchebycheff MOBO model performance is compared with that of different MOBO frameworks with respect to accuracy in parameter estimation, Pareto-optimal solutions, and function evaluation cost. The method is general enough to consider different families of predictive models in the portfolio for best-model selection, where the overall design architecture allows for solving high-dimensional (multiple-function) complex black-box problems and can be extended to other global-criterion multi-objective optimization methods where prior knowledge of the utopia point is required.

* 35 pages, 8 figures in main text and 2 figures in supplementary 
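
To illustrate the two key ingredients, weighted Tchebycheff scalarization around an estimated utopia point and RMSE-based selection of a surrogate from a model portfolio, here is a minimal hypothetical sketch using scikit-learn models. The toy objectives, weights, and portfolio are illustrative only and are not the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Hypothetical sketch of two ingredients: (1) weighted Tchebycheff scalarization of
# multiple objectives around an estimated utopia point, and (2) per-iteration
# selection of a surrogate from a model portfolio by prediction RMSE.

def weighted_tchebycheff(F, weights, utopia):
    """F: (n, m) objective values; returns the scalarized value for each design point."""
    return np.max(weights * np.abs(F - utopia), axis=1)

def select_best_model(X, y, portfolio):
    """Pick the portfolio model with the lowest cross-validated RMSE on the current data."""
    rmse = [-cross_val_score(m, X, y, cv=3,
                             scoring="neg_root_mean_squared_error").mean()
            for m in portfolio]
    return portfolio[int(np.argmin(rmse))]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(size=(30, 2))                                  # sampled designs
    F = np.column_stack([X.sum(axis=1), (1.0 - X).sum(axis=1)])    # two toy objectives
    utopia_est = F.min(axis=0)                                     # crude utopia estimate
    scal = weighted_tchebycheff(F, weights=np.array([0.5, 0.5]), utopia=utopia_est)

    portfolio = [LinearRegression(),
                 RandomForestRegressor(n_estimators=50, random_state=0),
                 GaussianProcessRegressor()]
    surrogate = select_best_model(X, scal, portfolio)
    print("selected surrogate:", type(surrogate).__name__)
    print("current best design index under the scalarization:", int(np.argmin(scal)))
```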