Vishnu Jejjala

Machine Learned Calabi--Yau Metrics and Curvature

Nov 17, 2022
Per Berglund, Giorgi Butbaia, Tristan Hübsch, Vishnu Jejjala, Damián Mayorga Peña, Challenger Mishra, Justin Tan

Finding Ricci-flat (Calabi--Yau) metrics is a long-standing problem in geometry with deep implications for string theory and phenomenology. A new attack on this problem uses neural networks to engineer approximations to the Calabi--Yau metric within a given Kähler class. In this paper we investigate numerical Ricci-flat metrics over smooth and singular K3 surfaces and Calabi--Yau threefolds. Using these Ricci-flat metric approximations for the Cefalú and Dwork families of quartic twofolds and the Dwork family of quintic threefolds, we study characteristic forms on these geometries. Using persistent homology, we show that high curvature regions of the manifolds form clusters near the singular points, but also elsewhere. For our neural network approximations, we observe a Bogomolov--Yau type inequality $3c_2 \geq c_1^2$ and find an identity when our geometries have isolated $A_1$ type singularities. We sketch a proof that $\chi(X \smallsetminus \mathrm{Sing}\,X) + 2\,|\mathrm{Sing}\,X| = 24$ also holds for our numerical approximations.
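
A minimal sketch of the persistent-homology step described above: sample points are filtered to a high-curvature subset, a Rips complex is built on them, and long-lived $H_0$ bars count the clusters. The coordinates, curvature proxy, threshold, and persistence cutoff below are all placeholder choices, not the paper's data or parameters.

```python
# Sketch: cluster high-curvature sample points with 0-dimensional
# persistent homology (Rips filtration). `points` stands in for local
# coordinates of samples on the manifold and `curvature` for a scalar
# curvature proxy at each sample; both are placeholders here.
import numpy as np
import gudhi

rng = np.random.default_rng(0)
points = rng.normal(size=(500, 4))          # placeholder sample coordinates
curvature = np.linalg.norm(points, axis=1)  # placeholder curvature proxy

# Keep only the high-curvature region, as in the clustering analysis.
threshold = np.quantile(curvature, 0.9)
high = points[curvature > threshold]

# Rips complex on the high-curvature points; H_0 bars that persist to
# large scales count the connected clusters.
rips = gudhi.RipsComplex(points=high, max_edge_length=2.0)
st = rips.create_simplex_tree(max_dimension=1)
diagram = st.persistence()
h0 = [(b, d) for dim, (b, d) in diagram if dim == 0]
long_bars = [bar for bar in h0 if bar[1] - bar[0] > 0.5]  # persistence cutoff
print(f"{len(long_bars)} persistent clusters among high-curvature points")
```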

* 36 pages, 21 figures, 7 tables, 2 appendices 

Towards Quantum Advantage on Noisy Quantum Computers

Sep 27, 2022
Ismail Yunus Akhalwaya, Shashanka Ubaru, Kenneth L. Clarkson, Mark S. Squillante, Vishnu Jejjala, Yang-Hui He, Kugendran Naidoo, Vasileios Kalantzis, Lior Horesh

Topological data analysis (TDA) is a powerful technique for extracting complex and valuable shape-related summaries of high-dimensional data. However, the computational demands of classical TDA algorithms are exorbitant, and quickly become impractical for high-order characteristics. Quantum computing promises exponential speedup for certain problems, yet many existing quantum algorithms with notable asymptotic speedups require a degree of fault tolerance that is currently unavailable. In this paper, we present NISQ-TDA, the first fully implemented end-to-end quantum machine learning algorithm needing only linear circuit depth that is applicable to non-handcrafted, high-dimensional classical data, with potential speedup under stringent conditions. The algorithm neither suffers from the data-loading problem nor needs to store the input data on the quantum computer explicitly. Our approach includes three key innovations: (a) an efficient realization of the full boundary operator as a sum of Pauli operators; (b) a quantum rejection sampling and projection approach that restricts a uniform superposition to the simplices of the desired order in the complex; and (c) a stochastic rank estimation method that estimates the topological features in the form of approximate Betti numbers. We present theoretical results that establish additive error guarantees for NISQ-TDA, together with circuit and computational time and depth complexities for exponentially scaled output estimates, up to the error tolerance. The algorithm was successfully executed on quantum computing devices and on noisy quantum simulators, applied to small datasets. Preliminary empirical results suggest that the algorithm is robust to noise.
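
As a classical point of reference for innovation (c), the Betti numbers that NISQ-TDA estimates can be computed exactly from boundary-operator ranks, $\beta_k = n_k - \mathrm{rank}\,\partial_k - \mathrm{rank}\,\partial_{k+1}$. A minimal sketch on a toy complex (a hollow triangle, with one connected component and one loop); the simplex bookkeeping is illustrative, not the paper's encoding.

```python
# Classical reference computation for what NISQ-TDA estimates:
# Betti numbers from boundary-operator ranks,
#   beta_k = n_k - rank(d_k) - rank(d_{k+1}).
# Toy complex: a hollow triangle (one loop), so beta_0 = beta_1 = 1.
import numpy as np

vertices = [0, 1, 2]
edges = [(0, 1), (0, 2), (1, 2)]  # no 2-simplex: the loop stays open

# d_1: edges -> vertices, one column per edge, entries +/-1 on endpoints.
d1 = np.zeros((len(vertices), len(edges)))
for j, (a, b) in enumerate(edges):
    d1[a, j], d1[b, j] = -1.0, 1.0

rank_d1 = np.linalg.matrix_rank(d1)
rank_d2 = 0  # no 2-simplices

beta0 = len(vertices) - 0 - rank_d1        # d_0 = 0
beta1 = len(edges) - rank_d1 - rank_d2
print(beta0, beta1)  # -> 1 1
```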

* This paper is a follow up to arXiv:2108.02811 with additional results 

Exponential advantage on noisy quantum computers

Sep 19, 2022
Ismail Yunus Akhalwaya, Shashanka Ubaru, Kenneth L. Clarkson, Mark S. Squillante, Vishnu Jejjala, Yang-Hui He, Kugendran Naidoo, Vasileios Kalantzis, Lior Horesh

Quantum computing offers the potential of exponential speedup over classical computation for certain problems. However, many of the existing algorithms with provable speedups require currently unavailable fault-tolerant quantum computers. We present NISQ-TDA, the first fully implemented quantum machine learning algorithm with provable exponential speedup on arbitrary classical (non-handcrafted) data that needs only linear circuit depth. We report the successful execution of our NISQ-TDA algorithm on quantum computing devices, as well as on noisy quantum simulators, applied to small datasets. We empirically confirm that the algorithm is robust to noise, and provide target depths and noise levels to realize near-term, non-fault-tolerant quantum advantage on real-world problems. Our unique data-loading projection method, a new self-correcting data-loading approach, is the main source of this noise robustness.
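
The stochastic rank estimation at the heart of the Betti number readout has a compact classical analogue: the rank of a positive semidefinite matrix is the trace of a spectral step function, which Hutchinson-style random probes estimate. In the sketch below the step function is applied through an exact eigendecomposition for clarity, whereas the algorithm itself relies on low-degree polynomial approximations compatible with shallow circuits; the matrix and probe count are placeholders.

```python
# Stochastic rank estimation sketch: rank(A) = tr(P), where P projects
# onto eigenvalues above a threshold; estimate tr(P) with Hutchinson
# random probes, E[x^T P x] over random sign vectors x.
# The step function is applied via an exact eigendecomposition here
# for clarity; the quantum algorithm uses polynomial approximations.
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(60, 25))
A = B @ B.T            # positive semidefinite, rank 25
threshold = 1e-8

evals, evecs = np.linalg.eigh(A)
step = (evals > threshold).astype(float)
P = evecs @ np.diag(step) @ evecs.T   # spectral projector onto range(A)

n_probes = 200
xs = rng.choice([-1.0, 1.0], size=(n_probes, A.shape[0]))
estimates = np.einsum("ij,jk,ik->i", xs, P, xs)   # x^T P x per probe
print(f"estimated rank ~ {estimates.mean():.1f} (exact: {int(step.sum())})")
```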

* arXiv admin note: substantial text overlap with arXiv:2108.02811 

Identifying equivalent Calabi--Yau topologies: A discrete challenge from math and physics for machine learning

Feb 15, 2022
Vishnu Jejjala, Washington Taylor, Andrew Turner

We briefly review the characteristic topological data of Calabi--Yau threefolds and focus on the question of when two threefolds are equivalent through related topological data. This provides an interesting test case for machine learning methodology in discrete mathematics problems motivated by physics.
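
To make the equivalence question concrete: by Wall's theorem, the relevant topological data of a Calabi--Yau threefold comprise the Hodge numbers, the triple intersection numbers $d_{ijk}$, and the second Chern class values $c_2 \cdot D_i$, and two data sets are equivalent if they match after an integral change of basis. A brute-force sketch for $h^{1,1} = 2$ with bounded matrix entries; the tensors below are illustrative placeholders, not entries from any database.

```python
# Brute-force test for equivalence of Calabi-Yau topological data:
# search over GL(2, Z) basis changes M (with bounded entries) for one
# matching d'_{ijk} = M_i^a M_j^b M_k^c d_{abc} and c2' = M c2.
# The tensors below are illustrative placeholders.
import itertools
import numpy as np

def transform(d, c2, M):
    d_new = np.einsum("ia,jb,kc,abc->ijk", M, M, M, d)
    return d_new, M @ c2

def equivalent(d1, c2a, d2, c2b, bound=2):
    entries = range(-bound, bound + 1)
    for flat in itertools.product(entries, repeat=4):
        M = np.array(flat).reshape(2, 2)
        if round(abs(np.linalg.det(M))) != 1:   # must be invertible over Z
            continue
        d_new, c2_new = transform(d1, c2a, M)
        if np.array_equal(d_new, d2) and np.array_equal(c2_new, c2b):
            return M
    return None

# Placeholder data: second triple (d2, c2b) is (d1, c2a) in a new basis.
d1 = np.zeros((2, 2, 2), dtype=int)
d1[0, 0, 0], d1[1, 1, 1] = 2, 4
c2a = np.array([24, 52])
M0 = np.array([[1, 1], [0, 1]])
d2, c2b = transform(d1, c2a, M0)

print(equivalent(d1, c2a, d2, c2b))  # recovers a valid basis change
```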

* 6 pages, 3 figures; Contribution to proceedings of 2021 Nankai symposium on Mathematical Dialogues in celebration of S. S. Chern's 110th anniversary 

Machine Learning Kreuzer--Skarke Calabi--Yau Threefolds

Dec 16, 2021
Per Berglund, Ben Campbell, Vishnu Jejjala

Using a fully connected feedforward neural network, we study topological invariants of a class of Calabi--Yau manifolds constructed as hypersurfaces in toric varieties associated with reflexive polytopes from the Kreuzer--Skarke database. In particular, we find that a simple expression for the Euler number can be learned from limited data extracted from the polytope and its dual.
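
A minimal sketch of the regression setup, assuming a fixed-length feature vector extracted from each polytope and its dual; the features and targets below are random placeholders standing in for the Kreuzer--Skarke data, with a simple linear target mimicking a learnable expression for the Euler number.

```python
# Sketch: fully connected feedforward regression of the Euler number
# from polytope-derived features. The random data below stands in for
# features read off a reflexive polytope and its dual.
import torch
from torch import nn

torch.manual_seed(0)
X = torch.randn(1024, 6)                    # placeholder polytope features
y = (X @ torch.tensor([2., -2., 1., -1., 0.5, -0.5])).unsqueeze(1)

model = nn.Sequential(nn.Linear(6, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print(f"final MSE: {loss.item():.4f}")
```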

* 16 pages, 4 figures 

Learning knot invariants across dimensions

Nov 30, 2021
Jessica Craven, Mark Hughes, Vishnu Jejjala, Arjun Kar

We use deep neural networks to machine learn correlations between knot invariants in various dimensions. The three-dimensional invariant of interest is the Jones polynomial $J(q)$, and the four-dimensional invariants are the Khovanov polynomial $\text{Kh}(q,t)$, smooth slice genus $g$, and Rasmussen's $s$-invariant. We find that a two-layer feed-forward neural network can predict $s$ from $\text{Kh}(q,-q^{-4})$ with greater than $99\%$ accuracy. A theoretical explanation for this performance exists in knot theory via the now disproven knight move conjecture, which is obeyed by all knots in our dataset. More surprisingly, we find similar performance for the prediction of $s$ from $\text{Kh}(q,-q^{-2})$, which suggests a novel relationship between the Khovanov and Lee homology theories of a knot. The network predicts $g$ from $\text{Kh}(q,t)$ with similarly high accuracy, and we discuss the extent to which the machine is learning $s$ as opposed to $g$, since there is a general inequality $|s| \leq 2g$. The Jones polynomial, as a three-dimensional invariant, is not obviously related to $s$ or $g$, but the network achieves greater than $95\%$ accuracy in predicting either from $J(q)$. Moreover, similar accuracy can be achieved by evaluating $J(q)$ at roots of unity. This suggests a relationship with $SU(2)$ Chern--Simons theory, and we review the gauge theory construction of Khovanov homology which may be relevant for explaining the network's performance.
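
A sketch of the evaluation step that produces such network inputs: a knot polynomial stored as a minimum degree and coefficient list, evaluated at roots of unity and split into real features. The example polynomial is the Jones polynomial of the trefoil, $J(q) = -q^{-4} + q^{-3} + q^{-1}$; the particular roots chosen below are placeholders.

```python
# Sketch: evaluate a Jones polynomial, stored as (min_degree, coeffs),
# at roots of unity to build network input features. Example: the
# trefoil, J(q) = -q^{-4} + q^{-3} + q^{-1}.
import numpy as np

def evaluate(min_degree, coeffs, q):
    """Evaluate sum_k coeffs[k] * q**(min_degree + k)."""
    return sum(c * q ** (min_degree + k) for k, c in enumerate(coeffs))

trefoil = (-4, [-1, 1, 0, 1])   # coefficients for degrees -4..-1

features = []
for r in (3, 4, 5, 6, 8):
    q = np.exp(2j * np.pi / r)             # primitive r-th root of unity
    val = evaluate(*trefoil, q)
    features.extend([val.real, val.imag])  # real features for the network
print(np.round(features, 3))
```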

* 35 pages, 6 figures 

Neural Network Approximations for Calabi-Yau Metrics

Jan 27, 2021
Vishnu Jejjala, Damian Kaloni Mayorga Pena, Challenger Mishra

Ricci-flat metrics for Calabi-Yau threefolds are not known analytically. In this work, we employ techniques from machine learning to deduce numerical Ricci-flat metrics for the Fermat quintic, for the Dwork quintic, and for the Tian-Yau manifold. This investigation employs a single neural network architecture that is capable of approximating Ricci-flat Kähler metrics for several Calabi-Yau manifolds of dimensions two and three. We show that measures assessing the Ricci flatness of the geometry decrease by three orders of magnitude after training. This is corroborated on the validation set, where the improvement is more modest. Finally, we demonstrate that discrete symmetries of manifolds can be learned in the process of learning the metric.
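
A common figure of merit in this setting is the $\sigma$-measure, which compares the volume form $\det g$ of the learned metric against the exact holomorphic density $|\Omega|^2$ and vanishes precisely when the Monge-Ampère equation $\det g = \kappa\,|\Omega|^2$ holds, i.e. when the metric is Ricci-flat. A minimal sketch of its Monte Carlo evaluation, assuming both densities have already been computed on points sampled with respect to $\Omega \wedge \bar\Omega$; the arrays below are placeholders.

```python
# Sketch: Monte Carlo sigma measure, a standard Ricci-flatness score.
# Assumes points were sampled with respect to the exact measure
# Omega ^ Omega-bar, and that det_g (learned metric determinant) and
# omega_sq (= |Omega|^2) were already evaluated at each point.
import numpy as np

def sigma_measure(det_g, omega_sq):
    """sigma = E[ |1 - ratio / E[ratio]| ], ratio = det g / |Omega|^2.
    Vanishes when det g = kappa |Omega|^2 holds exactly."""
    ratio = det_g / omega_sq
    kappa = ratio.mean()          # fixes the overall normalization
    return np.abs(1.0 - ratio / kappa).mean()

rng = np.random.default_rng(2)
omega_sq = rng.uniform(0.5, 1.5, size=10_000)              # placeholder
det_g = omega_sq * (1.0 + 0.01 * rng.normal(size=10_000))  # near-flat
print(f"sigma = {sigma_measure(det_g, omega_sq):.4f}")     # small -> flatter
```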

* v2: 42 pages, figures improved, discrete symmetries section added, discussions enhanced, references added 

Disentangling a Deep Learned Volume Formula

Dec 07, 2020
Jessica Craven, Vishnu Jejjala, Arjun Kar

We present a simple phenomenological formula which approximates the hyperbolic volume of a knot using only a single evaluation of its Jones polynomial at a root of unity. The average error is just 2.86% on the first 1.7 million knots, which represents a large improvement over previous formulas of this kind. To find the approximation formula, we use layer-wise relevance propagation to reverse engineer a black box neural network which achieves a similar average error for the same approximation task when trained on 10% of the total dataset. The particular roots of unity which appear in our analysis cannot be written as $e^{2\pi i / (k+2)}$ with integer $k$; therefore, the relevant Jones polynomial evaluations are not given by unknot-normalized expectation values of Wilson loop operators in conventional $SU(2)$ Chern-Simons theory with level $k$. Instead, they correspond to an analytic continuation of such expectation values to fractional level. We briefly review the continuation procedure and comment on the presence of certain Lefschetz thimbles, to which our approximation formula is sensitive, in the analytically continued Chern-Simons integration cycle.
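
The shape of such a formula is an affine function of the log-modulus of a single Jones evaluation, $\mathrm{Vol}(K) \approx a \log |J(q_0)| + b$ for a fixed $q_0$ on the unit circle; whether this exact affine form matches the paper's fitted expression is an assumption here, and the constants $a$, $b$ and the root $q_0$ below are placeholders rather than the fitted values. The example input is the figure-eight knot, with $J(q) = q^{-2} - q^{-1} + 1 - q + q^2$ and true hyperbolic volume $\approx 2.0299$.

```python
# Sketch of the formula's shape: the hyperbolic volume approximated by
# an affine function of log |J(q0)| for one fixed root of unity q0.
# The constants a, b and the root q0 are placeholders, not the paper's
# fitted values.
import numpy as np

def jones_eval(min_degree, coeffs, q):
    return sum(c * q ** (min_degree + k) for k, c in enumerate(coeffs))

def volume_estimate(min_degree, coeffs, q0, a=1.0, b=0.0):
    return a * np.log(abs(jones_eval(min_degree, coeffs, q0))) + b

# Figure-eight knot: J(q) = q^-2 - q^-1 + 1 - q + q^2, Vol ~ 2.0299.
fig8 = (-2, [1, -1, 1, -1, 1])
q0 = np.exp(2j * np.pi * 0.3)   # placeholder "fractional level" root
print(volume_estimate(*fig8, q0))
```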

* 26 + 19 pages, 15 figures 

Baryons from Mesons: A Machine Learning Perspective

Mar 23, 2020
Yarin Gal, Vishnu Jejjala, Damian Kaloni Mayorga Pena, Challenger Mishra

Quantum chromodynamics (QCD) is the theory of the strong interaction. The fundamental particles of QCD, quarks and gluons, carry colour charge and form colourless bound states at low energies. The hadronic bound states of primary interest to us are the mesons and the baryons. From knowledge of the meson spectrum, we use neural networks and Gaussian processes to predict the masses of baryons with 90.3% and 96.6% accuracy, respectively. These results compare favourably to the constituent quark model. We also predict the masses of pentaquarks and other exotic hadrons.
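
A minimal sketch of the Gaussian process half of the setup, using scikit-learn with an RBF kernel; the meson-derived features and baryon mass targets below are random placeholders, not real hadron data.

```python
# Sketch: Gaussian process regression of baryon masses from
# meson-spectrum-derived features. Features and targets are random
# placeholders standing in for the measured spectra.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(3)
X = rng.normal(size=(80, 5))                  # placeholder meson features
y = X @ np.array([0.3, 0.9, -0.4, 0.2, 0.1]) + 1.2  # placeholder masses (GeV)

kernel = ConstantKernel(1.0) * RBF(length_scale=1.0)
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-3, normalize_y=True)
gp.fit(X[:60], y[:60])                        # train/test split

mean, std = gp.predict(X[60:], return_std=True)
print(f"held-out RMSE: {np.sqrt(np.mean((mean - y[60:]) ** 2)):.3f} GeV")
```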

* 25 pages, 3 figures, 1 table 