We present a novel Neural Network architecture that can: i) output the R-matrix for a given quantum integrable spin chain, ii) search for an integrable Hamiltonian and the corresponding R-matrix under assumptions of certain symmetries or other restrictions, iii) explore the space of Hamiltonians around already learned models and reconstruct the family of integrable spin chains to which they belong. The neural network is trained by minimizing loss functions encoding the Yang-Baxter equation, regularity, and other model-specific restrictions such as hermiticity. Holomorphy is implemented via the choice of activation functions. We demonstrate the performance of our Neural Network on two-dimensional spin chains of difference form. In particular, we reconstruct the R-matrices for all 14 classes. We also demonstrate its utility as an \textit{Explorer}, scanning a certain subspace of Hamiltonians and identifying integrable classes after clustering. The latter strategy can be used in the future to carve out the map of integrable spin chains in higher dimensions and in more general settings where no analytical methods are available.
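The Yang-Baxter residual that such a loss function penalizes can be illustrated concretely. Below is a minimal numerical sketch (our own illustration, not the paper's architecture) that evaluates the YBE defect for Yang's rational R-matrix $R(u) = u\,\mathbb{1} + P$, a known two-dimensional solution of difference form; a trained network would minimize exactly this kind of residual over sampled spectral parameters:

```python
import numpy as np

I2 = np.eye(2)
# SWAP (permutation) operator on C^2 x C^2: |ab> -> |ba>
P = np.eye(4)[[0, 2, 1, 3]]

def R(u):
    # Yang's rational R-matrix R(u) = u*Id + P, a known YBE solution
    # of difference form; it is also regular: R(0) = P
    return u * np.eye(4) + P

def ybe_loss(Rfun, u, v):
    # Frobenius norm of R12(u-v) R13(u) R23(v) - R23(v) R13(u) R12(u-v)
    # acting on C^2 x C^2 x C^2
    S23 = np.kron(I2, P)                 # swap of spaces 2 and 3
    R12 = np.kron(Rfun(u - v), I2)
    R23 = np.kron(I2, Rfun(v))
    R13 = S23 @ np.kron(Rfun(u), I2) @ S23
    return np.linalg.norm(R12 @ R13 @ R23 - R23 @ R13 @ R12)

# vanishes (up to roundoff) for a genuine solution
print(ybe_loss(R, 0.7, 0.3))
```

In the paper's setting the entries of $R(u)$ are network outputs rather than a fixed ansatz, and this residual (together with regularity and hermiticity terms) forms the training objective.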
We review recent work on machine-learning aspects of conformal field theory and Lie algebra representation theory using neural networks.
We propose a novel approach to the vacuum degeneracy problem of the string landscape by finding an efficient measure of similarity amongst compactification scenarios. Using a class of some one million Calabi-Yau manifolds as concrete examples, we employ the paradigm of few-shot machine learning and Siamese Neural Networks to represent them as points in $\mathbb{R}^3$, where the similarity score between two manifolds is the Euclidean distance between their $\mathbb{R}^3$ representatives. Using these methods, we can compress the search space for exceedingly rare manifolds to within one percent of the original data by training on only a few hundred data points. We also demonstrate how these methods may be applied to characterize `typicality' for vacuum representatives.
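The similarity scoring described above can be sketched in miniature. The snippet below is a hedged illustration with toy feature vectors and a hypothetical linear-plus-tanh embedding (not the actual trained network): a shared map into $\mathbb{R}^3$ whose Euclidean distance plays the role of the similarity score, trained with a standard contrastive loss:

```python
import numpy as np

def embed(x, W):
    # shared ("Siamese") embedding of a feature vector into R^3;
    # W is a hypothetical stand-in for learned weights
    return np.tanh(x @ W)

def similarity(xa, xb, W):
    # similarity score = Euclidean distance between R^3 representatives
    return np.linalg.norm(embed(xa, W) - embed(xb, W))

def contrastive_loss(xa, xb, same, W, margin=1.0):
    # pull similar pairs together; push dissimilar pairs past the margin
    d = similarity(xa, xb, W)
    return d**2 if same else max(0.0, margin - d)**2

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 3))   # toy weights: 5 input features -> R^3
x = rng.standard_normal(5)

# an identical pair has distance 0, hence zero contrastive loss
print(contrastive_loss(x, x, True, W))
```

The design point is that the two branches share the same weights `W`, so the network learns a single embedding under which distance is meaningful; rare manifolds then cluster in a small region of $\mathbb{R}^3$ that can be searched directly.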
Classical and exceptional Lie algebras and their representations are among the most important tools in the analysis of symmetry in physical systems. In this letter we show that the computation of tensor products and branching rules of irreducible representations is machine-learnable, achieving relative speed-ups of several orders of magnitude in comparison to non-ML algorithms.
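The training data for such a model can be generated exactly in the simplest case. As our own minimal sketch (using the su(2) Clebsch-Gordan series rather than the letter's higher-rank algebras), the snippet below produces (input, label) pairs of the kind a supervised learner would be fit to, with the tensor-product decomposition playing the role of the label:

```python
def su2_tensor(d1, d2):
    # Clebsch-Gordan series for su(2): dimensions of the irreps in the
    # tensor product of irreps of dimensions d1 = 2*j1+1 and d2 = 2*j2+1
    lo, hi = abs(d1 - d2), d1 + d2 - 2
    return [m + 1 for m in range(lo, hi + 1, 2)]

# supervised dataset: input = pair of irrep dimensions,
# label = list of irrep dimensions in the decomposition
dataset = [((d1, d2), su2_tensor(d1, d2))
           for d1 in range(1, 6) for d2 in range(1, 6)]

# spin-1/2 x spin-1/2 = singlet + triplet
print(su2_tensor(2, 2))  # [1, 3]
```

The speed-up claimed in the abstract comes from replacing the combinatorial decomposition algorithm with a single forward pass through a trained model; the exact routine above is still needed once, to label the training set.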
We study the ensemble of products of $n$ complex Gaussian i.i.d. matrices. We find that this ensemble is Gaussian with a variance matrix which is averaged over a multi-Wishart ensemble. We compute the mixed moments and find that at large $N$ they are given by an enumeration of non-crossing pairings weighted by Fuss-Catalan numbers.
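The large-$N$ statement admits a direct numerical check. The following hedged sketch (finite $N$ and Monte Carlo sampling, so agreement is only approximate) compares empirical moments of $YY^\dagger$, with $Y$ a product of $n$ complex Ginibre matrices of entry variance $1/N$, against the Fuss-Catalan numbers $\mathrm{FC}_n(k)=\frac{1}{nk+1}\binom{(n+1)k}{k}$:

```python
import numpy as np
from math import comb

def fuss_catalan(n, k):
    # k-th limiting moment of the squared singular values of a
    # product of n Ginibre matrices; n = 1 recovers the Catalan numbers
    return comb((n + 1) * k, k) // (n * k + 1)

def empirical_moment(n, k, N=300, trials=5, seed=0):
    # Monte Carlo estimate of (1/N) E tr((Y Y^dag)^k) for
    # Y = X_1 ... X_n, entries of each X_i of variance 1/N
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(trials):
        Y = np.eye(N, dtype=complex)
        for _ in range(n):
            X = (rng.standard_normal((N, N))
                 + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
            Y = Y @ X
        W = Y @ Y.conj().T
        acc += np.trace(np.linalg.matrix_power(W, k)).real / N
    return acc / trials

# n = 2 product: FC_2(1) = 1, FC_2(2) = 3, FC_2(3) = 12
print(fuss_catalan(2, 1), fuss_catalan(2, 2), fuss_catalan(2, 3))
print(empirical_moment(2, 1), empirical_moment(2, 2))
```

At $N = 300$ the empirical moments already sit close to their Fuss-Catalan limits; the residual deviation is the finite-$N$ correction suppressed in the large-$N$ statement of the abstract.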