Ankit Agrawal

An Incremental Phase Mapping Approach for X-ray Diffraction Patterns using Binary Peak Representations

Nov 08, 2022
Dipendra Jha, K. V. L. V. Narayanachari, Ruifeng Zhang, Justin Liao, Denis T. Keane, Wei-keng Liao, Alok Choudhary, Yip-Wah Chung, Michael Bedzyk, Ankit Agrawal

Despite the huge advancement in knowledge discovery and data mining techniques, the X-ray diffraction (XRD) analysis process has remained largely untouched and still involves manual investigation, comparison, and verification. Due to the large volume of XRD samples from high-throughput XRD experiments, it has become impossible for domain scientists to process them manually. Recently, they have started leveraging standard clustering techniques to reduce the number of XRD patterns requiring manual labeling and verification. Nevertheless, these standard clustering techniques do not handle problem-specific aspects such as peak shifting, adjacent peaks, background noise, and mixed phases, resulting in incorrect composition-phase diagrams that complicate further steps. Here, we leverage data mining techniques along with domain expertise to handle these issues. In this paper, we introduce an incremental phase mapping approach based on binary peak representations using a new threshold-based fuzzy dissimilarity measure. The proposed approach first applies an incremental phase computation algorithm to discrete binary peak representations of XRD samples, followed by hierarchical clustering or manual merging of similar pure phases to obtain the final composition-phase diagram. We evaluate our method on the composition space of two ternary alloy systems, Co-Ni-Ta and Co-Ti-Ta. Our results were verified by domain scientists and closely resemble the manually computed ground-truth composition-phase diagrams. The proposed approach takes us closer to the goal of complete end-to-end automated XRD analysis.
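The abstract does not spell out the dissimilarity formula, so the following is a minimal sketch of the overall idea under stated assumptions: binarize the peak positions of each pattern, compare patterns with a shift-tolerant (fuzzy) mismatch score, and grow the set of pure-phase representatives incrementally. The prominence, shift_tol, and threshold values, and the helper names, are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.signal import find_peaks

def binarize_pattern(intensities, prominence=0.05):
    """Convert a 1D XRD intensity profile into a binary peak vector (1 = peak at that 2-theta bin)."""
    peaks, _ = find_peaks(intensities, prominence=prominence * float(np.max(intensities)))
    binary = np.zeros(len(intensities), dtype=np.uint8)
    binary[peaks] = 1
    return binary

def fuzzy_dissimilarity(a, b, shift_tol=3):
    """Shift-tolerant mismatch between two binary peak vectors: the fraction of peaks in one
    pattern with no counterpart in the other within +/- shift_tol bins, symmetrized.
    shift_tol absorbs small peak shifts; the value here is illustrative."""
    def unmatched(x, y):
        idx_x, idx_y = np.flatnonzero(x), np.flatnonzero(y)
        if len(idx_x) == 0:
            return 0.0
        misses = sum(1 for i in idx_x if not np.any(np.abs(idx_y - i) <= shift_tol))
        return misses / len(idx_x)
    return 0.5 * (unmatched(a, b) + unmatched(b, a))

def incremental_phase_mapping(binary_patterns, threshold=0.25):
    """Assign each sample to the closest existing phase representative if it is within
    `threshold`; otherwise start a new phase. Returns one phase label per sample."""
    representatives, labels = [], []
    for pattern in binary_patterns:
        dists = [fuzzy_dissimilarity(pattern, r) for r in representatives]
        if dists and min(dists) <= threshold:
            labels.append(int(np.argmin(dists)))
        else:
            representatives.append(pattern)
            labels.append(len(representatives) - 1)
    return labels
```

The resulting labels would then be merged (by hierarchical clustering or manually) into the final composition-phase diagram, as the abstract describes.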

* Accepted and presented at the International Workshop on Domain-Driven Data Mining (DDDM) as a part of the SIAM International Conference on Data Mining (SDM 2021). Contains 11 pages and 5 figures 

Extending MAPE-K to support Human-Machine Teaming

Mar 24, 2022
Jane Cleland-Huang, Ankit Agrawal, Michael Vierhauser, Michael Murphy, Mike Prieto

The MAPE-K feedback loop has been established as the primary reference model for self-adaptive and autonomous systems in domains such as autonomous driving, robotics, and Cyber-Physical Systems. At the same time, the Human-Machine Teaming (HMT) paradigm is designed to promote partnerships between humans and autonomous machines. It goes far beyond the degree of collaboration expected in human-on-the-loop and human-in-the-loop systems and emphasizes interactions, partnership, and teamwork between humans and machines. However, while MAPE-K enables fully autonomous behavior, it does not explicitly address the interactions between humans and machines as intended by HMT. In this paper, we present the MAPE-K-HMT framework, which augments the traditional MAPE-K loop with support for HMT. We identify critical human-machine teaming factors and describe the infrastructure needed across the various phases of the MAPE-K loop to effectively support HMT. This includes runtime models that are constructed and populated dynamically across the monitoring, analysis, planning, and execution phases to support human-machine partnerships. We illustrate MAPE-K-HMT using examples from an autonomous multi-UAV emergency response system and present guidelines for integrating HMT into MAPE-K.
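As a rough illustration of where teaming fits into the loop, here is a minimal Python sketch of one MAPE-K-HMT iteration. All names here (show_situation, request_confirmation, report_actions, needs_human_judgment) are hypothetical placeholders rather than the paper's API; the point is only where human interaction plugs into the monitor, analyze, plan, and execute phases.

```python
class ConsoleUI:
    """Stub operator interface, included only so the sketch runs end to end."""
    def show_situation(self, env):
        print("situation:", env)
    def request_confirmation(self, issue, default_plan):
        print("please confirm handling of:", issue)
        return default_plan                 # a real UI would let the human edit the plan
    def report_actions(self, plan):
        print("executed:", plan)

def mape_k_hmt_step(sense, detect_issues, generate_plan, apply_plan, ui):
    # Monitor: sense the environment and keep the human situationally aware.
    environment = sense()
    ui.show_situation(environment)

    # Analyze: flag issues and route only judgment-critical ones to the human teammate.
    issues = detect_issues(environment)
    needs_human = [i for i in issues if i.get("needs_human_judgment")]

    # Plan: the machine proposes; the human confirms or adjusts where teaming is required.
    plan = generate_plan(issues)
    for issue in needs_human:
        plan = ui.request_confirmation(issue, default_plan=plan)

    # Execute: act, then explain what was done so the partnership stays transparent.
    apply_plan(plan)
    ui.report_actions(plan)

# Example wiring with trivial stand-ins for a UAV swarm:
mape_k_hmt_step(
    sense=lambda: {"uavs_airborne": 3, "battery_low": ["UAV-2"]},
    detect_issues=lambda env: [{"issue": "low battery", "uav": u, "needs_human_judgment": True}
                               for u in env["battery_low"]],
    generate_plan=lambda issues: [{"action": "return_to_launch", "uav": i["uav"]} for i in issues],
    apply_plan=lambda plan: None,
    ui=ConsoleUI(),
)
```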

* Final published version appearing in 17th Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2022) 

Explaining Autonomous Decisions in Swarms of Human-on-the-Loop Small Unmanned Aerial Systems

Sep 05, 2021
Ankit Agrawal, Jane Cleland-Huang

Rapid advancements in Artificial Intelligence have shifted the focus from traditional human-directed robots to fully autonomous ones that do not require explicit human control. These are commonly referred to as Human-on-the-Loop (HotL) systems. Transparency of HotL systems necessitates clear explanations of autonomous behavior so that humans are aware of what is happening in the environment and can understand why robots behave in a certain way. However, in complex multi-robot environments, especially those in which the robots are autonomous, mobile, and require intermittent interventions, humans may struggle to maintain situational awareness. Presenting humans with rich explanations of autonomous behavior tends to overload them with too much information and negatively affects their understanding of the situation. Explaining the autonomous behavior of multiple robots therefore creates a design tension that demands careful investigation. This paper examines the User Interface (UI) design trade-offs associated with providing timely and detailed explanations of autonomous behavior for swarms of small Unmanned Aerial Systems (sUAS), or drones. We analyze the impact of UI design choices on human awareness of the situation. Drawing on multiple user studies with both inexperienced and expert sUAS operators, we present our design solution and provide initial guidelines for designing the HotL multi-sUAS interface.

* 10+2 pages; 6 Figures; 3 Tables; Accepted for publication at HCOMP'21 

Adaptive Autonomy in Human-on-the-Loop Vision-Based Robotics Systems

Mar 28, 2021
Sophia Abraham, Zachariah Carmichael, Sreya Banerjee, Rosaura VidalMata, Ankit Agrawal, Md Nafee Al Islam, Walter Scheirer, Jane Cleland-Huang

Computer vision approaches are widely used by autonomous robotic systems to sense the world around them and to guide their decision making as they perform diverse tasks such as collision avoidance, search and rescue, and object manipulation. High accuracy is critical, particularly for Human-on-the-loop (HoTL) systems where decisions are made autonomously by the system, and humans play only a supervisory role. Failures of the vision model can lead to erroneous decisions with potentially life or death consequences. In this paper, we propose a solution based upon adaptive autonomy levels, whereby the system detects loss of reliability of these models and responds by temporarily lowering its own autonomy levels and increasing engagement of the human in the decision-making process. Our solution is applicable for vision-based tasks in which humans have time to react and provide guidance. When implemented, our approach would estimate the reliability of the vision task by considering uncertainty in its model, and by performing covariate analysis to determine when the current operating environment is ill-matched to the model's training data. We provide examples from DroneResponse, in which small Unmanned Aerial Systems are deployed for Emergency Response missions, and show how the vision model's reliability would be used in addition to confidence scores to drive and specify the behavior and adaptation of the system's autonomy. This workshop paper outlines our proposed approach and describes open challenges at the intersection of Computer Vision and Software Engineering for the safe and reliable deployment of vision models in the decision making of autonomous systems.
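As a rough sketch of the adaptive-autonomy idea under stated assumptions, the snippet below combines a vision model's predictive uncertainty with a simple covariate-shift score against the training distribution and lowers the autonomy level when reliability drops. The entropy and Mahalanobis-style scores and the thresholds are illustrative stand-ins, not the specific mechanism used in DroneResponse.

```python
import numpy as np

def predictive_entropy(class_probs):
    """Uncertainty of a softmax output; high entropy = low confidence."""
    p = np.clip(class_probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def covariate_shift_score(feature_vec, train_mean, train_cov_inv):
    """Mahalanobis-style distance of the current input features from the training distribution."""
    d = feature_vec - train_mean
    return float(np.sqrt(d @ train_cov_inv @ d))

def choose_autonomy_level(class_probs, feature_vec, train_mean, train_cov_inv,
                          entropy_thresh=1.0, shift_thresh=3.0):
    """Return 'autonomous', 'confirm_with_human', or 'human_control' (illustrative levels)."""
    unreliable = predictive_entropy(class_probs) > entropy_thresh
    ill_matched = covariate_shift_score(feature_vec, train_mean, train_cov_inv) > shift_thresh
    if unreliable and ill_matched:
        return "human_control"          # hand the decision to the operator
    if unreliable or ill_matched:
        return "confirm_with_human"     # act only after human confirmation
    return "autonomous"
```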

A General Framework Combining Generative Adversarial Networks and Mixture Density Networks for Inverse Modeling in Microstructural Materials Design

Jan 26, 2021
Zijiang Yang, Dipendra Jha, Arindam Paul, Wei-keng Liao, Alok Choudhary, Ankit Agrawal

Microstructural materials design is one of the most important applications of inverse modeling in materials science. Generally speaking, there are two broad modeling paradigms in scientific applications: forward and inverse. While forward modeling estimates the observations from known parameters, inverse modeling attempts to infer the parameters given the observations. Inverse problems are usually more critical, as well as more difficult, in scientific applications because they seek to explore parameters that cannot be directly observed. Inverse problems are used extensively in various scientific fields, such as geophysics, healthcare, and materials science. However, they are challenging to solve because they usually need to learn a one-to-many non-linear mapping and also require significant computing time, especially for high-dimensional parameter spaces. Further, inverse problems become even more difficult to solve when the dimension of the input (i.e., the observation) is much lower than that of the output (i.e., the parameters). In this work, we propose a framework consisting of generative adversarial networks and mixture density networks for inverse modeling, and we evaluate it on a materials science dataset for microstructural materials design. Compared with baseline methods, the results demonstrate that the proposed framework can overcome the above-mentioned challenges and produce multiple promising solutions in an efficient manner.
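The mixture density network is the piece that lets one observation map to many candidate parameter sets. Below is a minimal PyTorch sketch of an MDN head and its negative log-likelihood loss; layer sizes and the number of mixture components are illustrative, and the paper's framework additionally pairs such a head with a GAN, which is omitted here.

```python
import torch
import torch.nn as nn

class MDN(nn.Module):
    """Predicts a K-component Gaussian mixture over design parameters given an observation,
    so one input can map to many plausible outputs."""
    def __init__(self, obs_dim, param_dim, hidden=128, n_components=5):
        super().__init__()
        self.param_dim, self.K = param_dim, n_components
        self.backbone = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, hidden), nn.ReLU())
        self.pi = nn.Linear(hidden, n_components)                     # mixture weights (logits)
        self.mu = nn.Linear(hidden, n_components * param_dim)         # component means
        self.log_sigma = nn.Linear(hidden, n_components * param_dim)  # component log-stds

    def forward(self, x):
        h = self.backbone(x)
        pi_logits = self.pi(h)
        mu = self.mu(h).view(-1, self.K, self.param_dim)
        sigma = self.log_sigma(h).view(-1, self.K, self.param_dim).exp()
        return pi_logits, mu, sigma

def mdn_nll(pi_logits, mu, sigma, target):
    """Negative log-likelihood of the target parameters under the predicted mixture."""
    target = target.unsqueeze(1)                                  # (B, 1, D) for broadcasting
    log_prob = torch.distributions.Normal(mu, sigma).log_prob(target).sum(dim=-1)  # (B, K)
    log_mix = torch.log_softmax(pi_logits, dim=-1) + log_prob
    return -torch.logsumexp(log_mix, dim=-1).mean()
```

At inference time, sampling several components (or taking the means of the highest-weight components) yields multiple candidate microstructure designs for a single target property, which is the one-to-many behavior the abstract emphasizes.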

Art Style Classification with Self-Trained Ensemble of AutoEncoding Transformations

Dec 06, 2020
Akshay Joshi, Ankit Agrawal, Sushmita Nair

The artistic style of a painting is a rich descriptor that reveals both visual and deep intrinsic knowledge about how an artist uniquely portrays and expresses their creative vision. Accurate categorization of paintings across different artistic movements and styles is critical for large-scale indexing of art databases. However, the automatic extraction and recognition of these highly dense artistic features has received little to no attention in computer vision research. In this paper, we investigate the use of deep self-supervised learning methods to solve the problem of recognizing complex artistic styles with high intra-class and low inter-class variation. Further, we outperform existing approaches by almost 20% on the highly class-imbalanced WikiArt dataset with 27 art categories. To achieve this, we train the EnAET semi-supervised learning model (Wang et al., 2019) with limited annotated data samples and supplement it with self-supervised representations learned from an ensemble of spatial and non-spatial transformations.
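For readers unfamiliar with autoencoding transformations, here is a minimal sketch of the self-supervision signal EnAET builds on: apply a random transformation to an image and train a small head to recover the transformation parameter from the (original, transformed) pair. The single rotation transform, the encoder interface, and the parameter normalization are illustrative assumptions; EnAET uses an ensemble of spatial and non-spatial transformations.

```python
import torch
import torchvision.transforms.functional as TF

def make_aet_pair(image):
    """image: (C, H, W) tensor. Returns a transformed view and the parameter to recover."""
    angle = float(torch.empty(1).uniform_(-45.0, 45.0))
    return TF.rotate(image, angle), torch.tensor([angle / 45.0])   # normalized regression target

class AETHead(torch.nn.Module):
    """Predicts the transformation parameter from the concatenated encoder features."""
    def __init__(self, encoder, feat_dim):
        super().__init__()
        self.encoder = encoder                       # any image encoder producing feat_dim features
        self.regressor = torch.nn.Linear(2 * feat_dim, 1)

    def forward(self, original, transformed):
        f = torch.cat([self.encoder(original), self.encoder(transformed)], dim=-1)
        return self.regressor(f)

# The self-supervised loss (MSE between predicted and true parameters) is added to the usual
# semi-supervised classification loss computed on the limited labeled subset.
```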

* 6 

A real-time iterative machine learning approach for temperature profile prediction in additive manufacturing processes

Aug 09, 2019
Arindam Paul, Mojtaba Mozaffar, Zijiang Yang, Wei-keng Liao, Alok Choudhary, Jian Cao, Ankit Agrawal

Additive Manufacturing (AM) is a manufacturing paradigm that builds three-dimensional objects from a computer-aided design model by successively adding material layer by layer. AM has become very popular in the past decade due to its utility for fast prototyping, such as 3D printing, as well as for manufacturing functional parts with complex geometries, using processes such as laser metal deposition, that would be difficult to create with traditional machining. Because the process of creating an intricate part from an expensive metal such as titanium is cost-prohibitive, computational models are used to simulate the behavior of AM processes before the experimental run. However, as these simulations are computationally costly and time-consuming for predicting multiscale, multi-physics phenomena in AM, physics-informed data-driven machine-learning systems for predicting the behavior of AM processes are immensely beneficial. Such models not only accelerate multiscale simulation tools but also empower real-time control systems using in-situ data. In this paper, we design and develop essential components of a scientific framework for developing a data-driven model-based real-time control system. Finite element methods are employed to solve the time-dependent heat equations and to build the database. The proposed framework uses extremely randomized trees (an ensemble of bagged decision trees) as the regression algorithm, iteratively using the temperatures of prior voxels and laser information as inputs to predict the temperatures of subsequent voxels. The models achieve mean absolute percentage errors below 1% for predicting temperature profiles of AM processes. The code is made available to the research community at https://github.com/paularindam/ml-iter-additive.
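A minimal sketch of the iterative scheme, assuming a simple feature layout: an ExtraTreesRegressor is trained on laser information plus the temperatures of previously processed voxels, and at prediction time each output is fed back as an input for the next voxel. The window size, feature ordering, and hyperparameters are illustrative, not the paper's configuration (the released code at the link above is the authoritative reference).

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def train_model(X, y, n_estimators=100):
    """X: (n_samples, n_features) of laser info + prior-voxel temperatures; y: next-voxel temps."""
    model = ExtraTreesRegressor(n_estimators=n_estimators, n_jobs=-1, random_state=0)
    model.fit(X, y)
    return model

def rollout(model, laser_features, initial_temps, n_steps, window=3):
    """Iteratively predict a temperature profile, feeding each prediction back as an input.
    `initial_temps` must contain at least `window` values; `laser_features[step]` is the
    laser-parameter vector for the voxel predicted at that step."""
    temps = list(initial_temps)
    for step in range(n_steps):
        prior = temps[-window:]                                # last `window` voxel temperatures
        x = np.concatenate([laser_features[step], prior]).reshape(1, -1)
        temps.append(float(model.predict(x)[0]))
    return np.array(temps[len(initial_temps):])
```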

* 6th IEEE International Conference on Data Science and Advanced Analytics (DSAA), 2019  
* 10 pages, 8 figures 

IRNet: A General Purpose Deep Residual Regression Framework for Materials Discovery

Jul 07, 2019
Dipendra Jha, Logan Ward, Zijiang Yang, Christopher Wolverton, Ian Foster, Wei-keng Liao, Alok Choudhary, Ankit Agrawal

Materials discovery is crucial for making scientific advances in many domains. Collections of data from experiments and first-principles computations have spurred interest in applying machine learning methods to create predictive models capable of mapping from composition and crystal structure to materials properties. Generally, these are regression problems with the input being a 1D vector composed of numerical attributes representing the material composition and/or crystal structure. While neural networks consisting of fully connected layers have been applied to such problems, their performance often suffers from the vanishing gradient problem when network depth is increased. In this paper, we study and propose design principles for building deep regression networks composed of fully connected layers with numerical vectors as input. We introduce a novel deep regression network with individual residual learning, IRNet, which places shortcut connections after each layer so that each layer learns the residual mapping between its output and input. We use the problem of learning properties of inorganic materials from numerical attributes derived from material composition and/or crystal structure to compare IRNet's performance against that of other machine learning techniques. Using multiple datasets from the Open Quantum Materials Database (OQMD) and the Materials Project for training and evaluation, we show that IRNet provides significantly better prediction performance than the state-of-the-art machine learning approaches currently used by domain scientists. We also show that IRNet's use of individual residual learning leads to better convergence during training than placing shortcut connections between multi-layer stacks, while maintaining the same number of parameters.
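A minimal PyTorch sketch of the individual residual learning idea: a shortcut is added after every fully connected layer (with a linear projection when the width changes), rather than around multi-layer stacks. The widths, depth, and use of batch normalization here are assumptions for illustration, not IRNet's published configuration.

```python
import torch
import torch.nn as nn

class IndividualResidualBlock(nn.Module):
    """One fully connected layer with its own shortcut, so the layer learns a residual mapping."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)
        self.bn = nn.BatchNorm1d(out_dim)
        self.act = nn.ReLU()
        # Project the shortcut when the layer changes dimensionality.
        self.shortcut = nn.Identity() if in_dim == out_dim else nn.Linear(in_dim, out_dim)

    def forward(self, x):
        return self.act(self.bn(self.fc(x))) + self.shortcut(x)

class IRNetLike(nn.Module):
    """Deep regressor over 1D numerical attribute vectors (e.g., composition descriptors)."""
    def __init__(self, in_dim, widths=(1024, 512, 256, 128), out_dim=1):
        super().__init__()
        dims = [in_dim, *widths]
        self.blocks = nn.Sequential(*[IndividualResidualBlock(d_in, d_out)
                                      for d_in, d_out in zip(dims[:-1], dims[1:])])
        self.head = nn.Linear(dims[-1], out_dim)   # regression output (e.g., formation energy)

    def forward(self, x):
        return self.head(self.blocks(x))
```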

* 9 pages, under publication at KDD'19 

Transfer Learning Using Ensemble Neural Networks for Organic Solar Cell Screening

Mar 30, 2019
Arindam Paul, Dipendra Jha, Reda Al-Bahrani, Wei-keng Liao, Alok Choudhary, Ankit Agrawal

Organic solar cells are a promising technology for addressing the world's clean energy crisis. However, generating candidate chemical compounds for solar cells is a time-consuming process requiring thousands of hours of laboratory analysis. For a solar cell, the most important property is the power conversion efficiency, which depends on the highest occupied molecular orbital (HOMO) values of the donor molecules. Recently, machine learning techniques have proved to be very useful in building predictive models for HOMO values of donor structures of Organic Photovoltaic Cells (OPVs). Since experimental datasets are limited in size, current machine learning models are trained on data derived from calculations based on density functional theory (DFT). Molecular line notations such as SMILES or InChI are popular input representations for describing the molecular structure of donor molecules. The two types of line representations encode different information: for example, SMILES defines bond types, while InChI encodes protonation. In this work, we present an ensemble deep neural network architecture, called SINet, which harnesses both the SMILES and InChI molecular representations to predict HOMO values, and leverages transfer learning from a sizeable DFT-computed dataset (Harvard CEP) to build more robust predictive models for the relatively smaller HOPV dataset. The Harvard CEP dataset contains molecular structures and properties for 2.3 million candidate donor structures for OPVs, while HOPV contains DFT-computed and experimental values for 350 and 243 molecules, respectively. Our results demonstrate significant performance improvement from the use of transfer learning and from leveraging both molecular representations.
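A minimal PyTorch sketch of a two-branch model in the spirit of SINet: one encoder for SMILES-derived features, one for InChI-derived features, merged into a single HOMO regression head, with the branches frozen for fine-tuning on the smaller dataset. The feature encodings, layer sizes, and freezing strategy are illustrative assumptions, not the paper's exact architecture or training protocol.

```python
import torch
import torch.nn as nn

class BranchEncoder(nn.Module):
    """Encoder for one molecular line-notation feature vector (SMILES- or InChI-derived)."""
    def __init__(self, in_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class SINetLike(nn.Module):
    """Two-branch regressor: merged SMILES and InChI representations predict the HOMO value."""
    def __init__(self, smiles_dim, inchi_dim, hidden=256):
        super().__init__()
        self.smiles_branch = BranchEncoder(smiles_dim, hidden)
        self.inchi_branch = BranchEncoder(inchi_dim, hidden)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, smiles_feats, inchi_feats):
        merged = torch.cat([self.smiles_branch(smiles_feats),
                            self.inchi_branch(inchi_feats)], dim=-1)
        return self.head(merged)

def freeze_branches(model):
    """Transfer-learning step: pretrain on the large DFT-computed dataset, then freeze the
    branch encoders and fine-tune only the head (or top layers) on the much smaller HOPV data."""
    for p in list(model.smiles_branch.parameters()) + list(model.inchi_branch.parameters()):
        p.requires_grad = False
```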

* 8 pages, 11 figures, International Joint Conference on Neural Networks 