
Javier Duarte


Scalable neural network models and terascale datasets for particle-flow reconstruction

Sep 13, 2023
Joosep Pata, Eric Wulff, Farouk Mokhtar, David Southwick, Mengke Zhang, Maria Girone, Javier Duarte

We study scalable machine learning models for full event reconstruction in high-energy electron-positron collisions based on a highly granular detector simulation. Particle-flow (PF) reconstruction can be formulated as a supervised learning task using tracks and calorimeter clusters or hits. We compare a graph neural network and a kernel-based transformer and demonstrate that both avoid quadratic memory allocation and computational cost while achieving realistic PF reconstruction. We show that hyperparameter tuning on a supercomputer significantly improves the physics performance of the models. We also demonstrate that the resulting model is highly portable across hardware processors, supporting Nvidia, AMD, and Intel Habana cards. Finally, we demonstrate that the model can be trained on highly granular inputs consisting of tracks and calorimeter hits, achieving physics performance competitive with the baseline. Datasets and software to reproduce the studies are published following the findable, accessible, interoperable, and reusable (FAIR) principles.
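The sub-quadratic scaling mentioned above is the key property of kernel-based attention: the softmax is replaced by a positive feature map so the N x N attention matrix is never materialized. A rough illustrative sketch in NumPy (not the authors' model; the feature map and shapes are assumptions):

```python
import numpy as np

def linear_attention(Q, K, V, eps=1e-6):
    """Kernel-based (linear) attention: with a positive feature map phi,
    attention becomes phi(Q) @ (phi(K).T @ V), avoiding the N x N matrix.
    Q, K: (N, d); V: (N, d_v)."""
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1, always positive
    Qp, Kp = phi(Q), phi(K)
    KV = Kp.T @ V                        # (d, d_v): cost O(N * d * d_v), linear in N
    Z = Qp @ Kp.sum(axis=0) + eps        # (N,): per-query normalization
    return (Qp @ KV) / Z[:, None]
```

Because the weights for each query sum to one (up to eps), constant values are preserved: attending over a V of all ones returns (approximately) all ones.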

* 19 pages, 7 figures 

Low Latency Edge Classification GNN for Particle Trajectory Tracking on FPGAs

Jun 27, 2023
Shi-Yu Huang, Yun-Chen Yang, Yu-Ru Su, Bo-Cheng Lai, Javier Duarte, Scott Hauck, Shih-Chieh Hsu, Jin-Xuan Hu, Mark S. Neubauer


In-time particle trajectory reconstruction in the Large Hadron Collider is challenging due to the high collision rate and numerous particle hits. Using graph neural networks (GNNs) on FPGAs has enabled superior accuracy with flexible trajectory classification. However, existing GNN architectures have inefficient resource usage and insufficient parallelism for edge classification. This paper introduces a resource-efficient GNN architecture on FPGAs for low-latency particle tracking. The modular architecture facilitates design scalability to support large graphs. Leveraging the geometric properties of hit detectors further reduces graph complexity and resource usage. Our results on a Xilinx UltraScale+ VU9P demonstrate 1625x and 1574x performance improvements over CPU and GPU implementations, respectively.


Differentiable Earth Mover's Distance for Data Compression at the High-Luminosity LHC

Jun 07, 2023
Rohan Shenoy, Javier Duarte, Christian Herwig, James Hirschauer, Daniel Noonan, Maurizio Pierini, Nhan Tran, Cristina Mantilla Suarez


The Earth mover's distance (EMD) is a useful metric for image recognition and classification, but its usual implementations are either not differentiable or too slow to be used as a loss function for training other algorithms via gradient descent. In this paper, we train a convolutional neural network (CNN) to learn a differentiable, fast approximation of the EMD and demonstrate that it can be used as a substitute for computationally intensive EMD implementations. We apply this differentiable approximation in the training of an autoencoder-inspired neural network (encoder NN) for data compression at the High-Luminosity LHC at CERN. The goal of this encoder NN is to compress the data while preserving the information related to the distribution of energy deposits in particle detectors. We demonstrate that the performance of our encoder NN trained using the differentiable EMD CNN surpasses that of training with loss functions based on mean squared error.
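For intuition about the quantity the CNN is trained to approximate: in one dimension, the EMD between normalized histograms has a closed form as the L1 distance between their cumulative distributions. A minimal sketch (not the paper's CNN surrogate; unit-spaced bins are an assumption here):

```python
import numpy as np

def emd_1d(p, q):
    """Exact 1D earth mover's distance between two histograms on
    unit-spaced bins: the L1 distance between their CDFs, after
    normalizing each histogram to unit total mass."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    cdf_diff = np.cumsum(p / p.sum() - q / q.sum())
    return np.abs(cdf_diff).sum()
```

For example, moving all mass from bin 0 to bin 2 costs a distance of 2 bin widths. In two or more dimensions no such closed form exists, which is what motivates a learned fast approximation.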

* 15 pages, 7 figures, submitted to Machine Learning: Science and Technology 

End-to-end codesign of Hessian-aware quantized neural networks for FPGAs and ASICs

Apr 13, 2023
Javier Campos, Zhen Dong, Javier Duarte, Amir Gholami, Michael W. Mahoney, Jovan Mitrevski, Nhan Tran


We develop an end-to-end workflow for the training and implementation of co-designed neural networks (NNs) for efficient field-programmable gate array (FPGA) and application-specific integrated circuit (ASIC) hardware. Our approach leverages Hessian-aware quantization (HAWQ) of NNs, the Quantized Open Neural Network Exchange (QONNX) intermediate representation, and the hls4ml tool flow for transpiling NNs into FPGA and ASIC firmware. This makes efficient NN implementations in hardware accessible to nonexperts, in a single open-source workflow that can be deployed for real-time machine learning applications in a wide range of scientific and industrial settings. We demonstrate the workflow in a particle physics application involving trigger decisions that must operate at the 40 MHz collision rate of the CERN Large Hadron Collider (LHC). Given the high collision rate, all data processing must be implemented on custom ASIC and FPGA hardware within strict area and latency constraints. Subject to these constraints, we implement an optimized mixed-precision NN classifier for high-momentum particle jets in simulated LHC proton-proton collisions.
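As background for the mixed-precision quantization the workflow relies on, the basic per-tensor building block is a symmetric uniform quantizer; Hessian-aware schemes then assign a different bit width to each layer. A generic illustrative sketch (not the HAWQ or hls4ml implementation; the clipping-range choice is an assumption):

```python
import numpy as np

def quantize_uniform(x, bits, x_max=None):
    """Symmetric uniform quantization of a tensor to a signed
    integer grid of the given bit width, returned in dequantized
    (floating-point) form so the rounding error is visible."""
    x = np.asarray(x, dtype=float)
    if x_max is None:
        x_max = np.abs(x).max()          # simple max-abs clipping range
    n_levels = 2 ** (bits - 1) - 1       # e.g. 127 for 8 bits
    scale = x_max / n_levels
    q = np.clip(np.round(x / scale), -n_levels, n_levels)
    return q * scale
```

For values inside the clipping range, the round-to-nearest error is bounded by half a quantization step, scale / 2; lowering the bit width trades this error against hardware area, which is what the mixed-precision search optimizes.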

* 19 pages, 6 figures, 2 tables 

Progress towards an improved particle flow algorithm at CMS with machine learning

Mar 30, 2023
Farouk Mokhtar, Joosep Pata, Javier Duarte, Eric Wulff, Maurizio Pierini, Jean-Roch Vlimant


The particle-flow (PF) algorithm, which infers particles based on tracks and calorimeter clusters, is of central importance to event reconstruction in the CMS experiment at the CERN LHC, and has been a focus of development in light of planned Phase-2 running conditions with increased pileup and detector granularity. In recent years, the machine-learned particle-flow (MLPF) algorithm, a graph neural network that performs PF reconstruction, has been explored in CMS, with the possible advantages of directly optimizing for the physical quantities of interest, being highly reconfigurable to new conditions, and being a natural fit for deployment to heterogeneous accelerators. We discuss progress in CMS towards an improved implementation of the MLPF reconstruction, now optimized using generator/simulation-level particle information as the target for the first time. This paves the way to potentially improving the detector response in terms of physical quantities of interest. We describe the simulation-based training target, progress and studies on event-based loss terms, details on the model hyperparameter tuning, as well as physics validation with respect to the current PF algorithm in terms of high-level physical quantities such as the jet and missing transverse momentum resolutions. We find that the MLPF algorithm, trained for the first time on generator/simulation-level particle information, yields particle and jet reconstruction performance broadly compatible with the baseline PF, setting the stage for further physics performance improvements through additional training statistics and model tuning.

* ACAT 2022: 21st International Workshop on Advanced Computing and Analysis Techniques in Physics Research  
* 7 pages, 4 Figures, 1 Table 

FAIR AI Models in High Energy Physics

Dec 21, 2022
Javier Duarte, Haoyang Li, Avik Roy, Ruike Zhu, E. A. Huerta, Daniel Diaz, Philip Harris, Raghav Kansal, Daniel S. Katz, Ishaan H. Kavoori, Volodymyr V. Kindratenko, Farouk Mokhtar, Mark S. Neubauer, Sang Eon Park, Melissa Quinnan, Roger Rusack, Zhizhen Zhao


The findable, accessible, interoperable, and reusable (FAIR) data principles have provided a framework for examining, evaluating, and improving how we share data with the aim of facilitating scientific discovery. Efforts have been made to generalize these principles to research software and other digital products. Artificial intelligence (AI) models -- algorithms that have been trained on data rather than explicitly programmed -- are an important target for this because of the ever-increasing pace with which AI is transforming scientific and engineering domains. In this paper, we propose a practical definition of FAIR principles for AI models and create a FAIR AI project template that promotes adherence to these principles. We demonstrate how to implement these principles using a concrete example from experimental high energy physics: a graph neural network for identifying Higgs bosons decaying to bottom quarks. We study the robustness of these FAIR AI models and their portability across hardware architectures and software frameworks, and report new insights on the interpretability of AI predictions by studying the interplay between FAIR datasets and AI models. Enabled by publishing FAIR AI models, these studies pave the way toward reliable and automated AI-driven scientific discovery.

* 32 pages, 8 figures, 9 tables 

Lorentz Group Equivariant Autoencoders

Dec 14, 2022
Zichun Hao, Raghav Kansal, Javier Duarte, Nadezda Chernyavskaya


There has been significant work recently in developing machine learning models in high energy physics (HEP), for tasks such as classification, simulation, and anomaly detection. Typically, these models are adapted from those designed for datasets in computer vision or natural language processing without necessarily incorporating inductive biases suited to HEP data, such as respecting its inherent symmetries. Such inductive biases can make the model more performant and interpretable, and reduce the amount of training data needed. To that end, we develop the Lorentz group autoencoder (LGAE), an autoencoder model equivariant with respect to the proper, orthochronous Lorentz group $\mathrm{SO}^+(3,1)$, with a latent space living in the representations of the group. We present our architecture and several experimental results on jets at the LHC and find it significantly outperforms a non-Lorentz-equivariant graph neural network baseline on compression, reconstruction, and anomaly detection. We also demonstrate the advantage of such an equivariant model in analyzing the latent space of the autoencoder, which can have a significant impact on the explainability of anomalies found by such black-box machine learning models.
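The symmetry the LGAE respects is that Lorentz transformations preserve the Minkowski inner product of four-momenta. A quick numerical check of this invariance (illustrative only, not the LGAE architecture; the toy four-momentum and boost velocity are assumptions):

```python
import numpy as np

def minkowski_norm2(p):
    """Minkowski norm squared with metric signature (+,-,-,-),
    for a four-momentum p = (E, px, py, pz)."""
    return p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2

def boost_z(beta):
    """4x4 Lorentz boost matrix along the z axis with velocity beta
    (in units of c)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[3, 3] = gamma
    L[0, 3] = L[3, 0] = -gamma * beta
    return L

p = np.array([5.0, 1.0, 2.0, 3.0])   # toy four-momentum (E, px, py, pz)
p_boosted = boost_z(0.6) @ p
# the invariant mass squared is unchanged by the boost
assert np.isclose(minkowski_norm2(p), minkowski_norm2(p_boosted))
```

An equivariant network commits to producing outputs that transform consistently under such boosts and rotations, rather than learning the symmetry from data.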

* 7 pages, 6 figures, 4 tables, and a 2-page appendix 

On the Evaluation of Generative Models in High Energy Physics

Nov 18, 2022
Raghav Kansal, Anni Li, Javier Duarte, Nadezda Chernyavskaya, Maurizio Pierini, Breno Orzari, Thiago Tomei


There has been a recent explosion in research into machine-learning-based generative modeling to tackle computational challenges for simulations in high energy physics (HEP). In order to use such alternative simulators in practice, we need well-defined metrics to compare different generative models and evaluate their discrepancy from the true distributions. We present the first systematic review and investigation into evaluation metrics and their sensitivity to failure modes of generative models, using the framework of two-sample goodness-of-fit testing, and their relevance and viability for HEP. Inspired by previous work in both physics and computer vision, we propose two new metrics, the Fréchet and kernel physics distances (FPD and KPD), and perform a variety of experiments measuring their performance on simple Gaussian-distributed datasets and simulated high-energy jet datasets. We find FPD, in particular, to be the most sensitive metric to all alternative jet distributions tested and recommend its adoption, along with the KPD and Wasserstein distances between individual feature distributions, for evaluating generative models in HEP. We finally demonstrate the efficacy of these proposed metrics in evaluating and comparing a novel attention-based generative adversarial particle transformer to the state-of-the-art message-passing generative adversarial network jet simulation model.
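Fréchet-style metrics of this kind reduce, under a Gaussian approximation of the two feature distributions, to the 2-Wasserstein distance between Gaussians. A minimal sketch of that computation (not the paper's FPD implementation; fitting a single Gaussian to raw feature samples is an assumption made here for illustration):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(X, Y):
    """Frechet (2-Wasserstein) distance between Gaussian fits to two
    feature samples X, Y of shape (n_samples, n_features) -- the
    quantity underlying Frechet-style metrics such as FID."""
    mu_x, mu_y = X.mean(axis=0), Y.mean(axis=0)
    cov_x = np.cov(X, rowvar=False)
    cov_y = np.cov(Y, rowvar=False)
    # matrix square root; .real discards tiny imaginary numerical noise
    covmean = sqrtm(cov_x @ cov_y).real
    d2 = np.sum((mu_x - mu_y) ** 2) + np.trace(cov_x + cov_y - 2.0 * covmean)
    return float(np.sqrt(max(d2, 0.0)))
```

When the two samples differ only by a constant shift of the mean, the covariance terms cancel and the distance equals the size of the shift, which makes the metric easy to sanity-check.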

* 11 pages, 5 figures, 3 tables, and a 3-page appendix 