Abstract: The fact that we can build models from data, and therefore refine our models with more data from experiments, is usually taken for granted in scientific inquiry. However, how much information can we extract, and how precise can we expect our learned model to be, if we have only a finite amount of data at our disposal? Nuclear physics demands a high degree of precision from models that are inferred from the limited number of nuclei that can possibly be produced in the laboratory. In this manuscript I will introduce some concepts of computational science, such as the statistical theory of learning and Hamiltonian complexity, and use them to contextualise the results concerning the amount of data necessary to extrapolate a mass model to a given precision.
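As an illustration of the flavour of such results (a textbook bound from probably approximately correct learning in the realizable setting, not a result specific to this manuscript), the number of samples $N$ needed to select, from a finite hypothesis class $\mathcal{H}$, a model with error at most $\epsilon$ with confidence $1-\delta$ scales as
\[
  N \gtrsim \frac{1}{\epsilon}\left(\ln|\mathcal{H}| + \ln\frac{1}{\delta}\right),
\]
so that, for a fixed model class, the achievable error shrinks only as $1/N$ in the amount of data.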




Abstract: In this paper we develop a quantum optimization algorithm and use it to solve the bundle adjustment problem with a simulated quantum computer. Bundle adjustment is the process of jointly optimizing camera poses and sensor properties to best reconstruct the three-dimensional structure and viewing parameters of a scene. This problem is often solved using some implementation of the Levenberg--Marquardt algorithm. Here we implement a quantum algorithm for solving the linear system of normal equations that calculates the optimization step in Levenberg--Marquardt. This step is the current bottleneck in the algorithmic complexity of bundle adjustment. The proposed quantum algorithm dramatically reduces the complexity of this operation with respect to the number of points. We investigate nine configurations of a toy model for bundle adjustment, limited to 10 points and 2 cameras. This optimization problem is solved both with the sparse Levenberg--Marquardt algorithm and with our quantum implementation. The resulting solutions are presented, showing an improved rate of convergence, together with an analysis of the theoretical speed-up and of the probability of running the algorithm successfully on a current quantum computer. The presented quantum algorithm is a seminal demonstration of how quantum computing can be applied to complex optimization problems in computer vision, in particular bundle adjustment, and it opens several avenues for further investigation.
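For concreteness, the linear system in question can be sketched classically as follows (a minimal illustration with placeholder names J, r, and lam, not the paper's implementation); the quantum linear-system solver replaces the np.linalg.solve call on the damped normal equations.

import numpy as np

def lm_step(J, r, lam):
    # Damped normal equations of Levenberg--Marquardt:
    #   (J^T J + lam * I) delta = -J^T r
    # Solving this linear system is the per-iteration bottleneck
    # that the quantum algorithm is designed to accelerate.
    A = J.T @ J + lam * np.eye(J.shape[1])
    return np.linalg.solve(A, -J.T @ r)

# Toy usage: 10 residuals, 4 parameters, mild damping.
J = np.random.rand(10, 4)
r = np.random.rand(10)
delta = lm_step(J, r, lam=1e-3)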




Abstract: More than 80 years after the seminal work of Weizs\"acker and the liquid drop model of the atomic nucleus, theoretical errors on nuclear masses ($\sim$ MeV) are orders of magnitude larger than experimental ones ($\lesssim$ keV). Predicting the mass of atomic nuclei with precision is extremely challenging, due to the non-trivial many-body interplay of protons and neutrons in nuclei and the complex nature of the nuclear strong force. This paper argues that the arduous development of nuclear physics in the past century is due to the exploration of a system at the limit of what is learnable, as defined within the statistical theory of learning.
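For reference, the liquid drop model referred to above is the semi-empirical (Bethe--Weizs\"acker) mass formula for the binding energy of a nucleus with mass number $A$ and proton number $Z$,
\[
  B(A,Z) = a_V A - a_S A^{2/3} - a_C \frac{Z(Z-1)}{A^{1/3}} - a_A \frac{(A-2Z)^2}{A} + \delta(A,Z),
\]
whose volume, surface, Coulomb, asymmetry, and pairing coefficients are fitted to data; even with modern fits, its residuals remain at the MeV scale quoted above.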