Giacomo De Palma

Quantum algorithms for group convolution, cross-correlation, and equivariant transformations

Sep 23, 2021
Grecia Castelazo, Quynh T. Nguyen, Giacomo De Palma, Dirk Englund, Seth Lloyd, Bobak T. Kiani

Group convolutions and cross-correlations, which are equivariant to the actions of group elements, are commonly used in mathematics to analyze or take advantage of symmetries inherent in a given problem setting. Here, we provide efficient quantum algorithms for performing linear group convolutions and cross-correlations on data stored as quantum states. Runtimes for our algorithms are logarithmic in the dimension of the group, thus offering an exponential speedup compared to classical algorithms when the input data is provided as a quantum state and the linear operations are well conditioned. Motivated by the rich literature on quantum algorithms for solving algebraic problems, our theoretical framework opens a path for quantizing many algorithms in machine learning and numerical methods that employ group operations.
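
The exponential speedup above is for the quantum setting, where the group-structured data is encoded in state amplitudes. As context only, the sketch below shows the classical operations being quantized, specialized to the cyclic group Z_N, where group convolution and cross-correlation reduce to their circular versions. The function names and the choice of group are illustrative assumptions, not taken from the paper, and nothing in the sketch is quantum.

    # Classical group convolution and cross-correlation over the cyclic group Z_N,
    # whose group operation is addition mod N. The quantum algorithms in the paper
    # apply the analogous linear maps to the amplitudes of quantum states.
    import numpy as np

    def cyclic_group_convolution(f, g):
        """(f * g)(x) = sum_y f(y) g(x - y mod N) for functions f, g on Z_N."""
        N = len(f)
        return np.array([sum(f[y] * g[(x - y) % N] for y in range(N)) for x in range(N)])

    def cyclic_cross_correlation(f, g):
        """(f star g)(x) = sum_y f(y) g(x + y mod N); equivariant to cyclic shifts."""
        N = len(f)
        return np.array([sum(f[y] * g[(x + y) % N] for y in range(N)) for x in range(N)])

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        f, g = rng.normal(size=8), rng.normal(size=8)
        # Equivariance: convolving a cyclically shifted f equals cyclically shifting f * g.
        shift = 3
        lhs = cyclic_group_convolution(np.roll(f, shift), g)
        rhs = np.roll(cyclic_group_convolution(f, g), shift)
        assert np.allclose(lhs, rhs)
        print(lhs)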

Quantum Earth Mover's Distance: A New Approach to Learning Quantum Data

Jan 08, 2021
Bobak Toussi Kiani, Giacomo De Palma, Milad Marvian, Zi-Wen Liu, Seth Lloyd

Quantifying how far the output of a learning algorithm is from its target is an essential task in machine learning. However, in quantum settings, the loss landscapes of commonly used distance metrics often produce undesirable outcomes such as poor local minima and exponentially decaying gradients. As a new approach, we consider here the quantum earth mover's (EM) or Wasserstein-1 distance, recently proposed in [De Palma et al., arXiv:2009.04469] as a quantum analog to the classical EM distance. We show that the quantum EM distance possesses unique properties, not found in other commonly used quantum distance metrics, that make quantum learning more stable and efficient. We propose a quantum Wasserstein generative adversarial network (qWGAN) which takes advantage of the quantum EM distance and provides an efficient means of performing learning on quantum data. Our qWGAN requires resources polynomial in the number of qubits, and our numerical experiments demonstrate that it is capable of learning a diverse set of quantum data.
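
The quantum EM distance itself is defined through a quantum generalization of Lipschitz observables (see the cited De Palma et al. preprint) and is beyond a short snippet. Purely as a reference point, the sketch below computes the classical EM / Wasserstein-1 distance between two probability distributions on a grid of real points, via the standard CDF-difference formula. It is the classical quantity being generalized, not the paper's quantum construction, and the names are illustrative.

    # Classical earth mover's (Wasserstein-1) distance between two distributions p, q
    # on a common sorted grid x of real points:
    #   W1(p, q) = sum_i |P_i - Q_i| * (x[i+1] - x[i]),  with P, Q the running CDFs.
    # This is only the classical analogue of the quantum EM distance discussed above.
    import numpy as np

    def classical_w1(x, p, q):
        """Wasserstein-1 distance between distributions p and q on the sorted grid x."""
        P, Q = np.cumsum(p), np.cumsum(q)
        return float(np.sum(np.abs(P[:-1] - Q[:-1]) * np.diff(x)))

    if __name__ == "__main__":
        x = np.array([0.0, 1.0, 2.0, 3.0])
        p = np.array([1.0, 0.0, 0.0, 0.0])  # all mass at x = 0
        q = np.array([0.0, 0.0, 0.0, 1.0])  # all mass at x = 3
        print(classical_w1(x, p, q))        # 3.0: one unit of mass moved a distance of 3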

Adversarial robustness guarantees for random deep neural networks

Apr 13, 2020
Giacomo De Palma, Bobak T. Kiani, Seth Lloyd

The reliability of most deep learning algorithms is fundamentally challenged by the existence of adversarial examples, which are incorrectly classified inputs that are extremely close to a correctly classified input. We study adversarial examples for deep neural networks with random weights and biases and prove that the $\ell^1$ distance of any given input from the classification boundary scales at least as $\sqrt{n}$, where $n$ is the dimension of the input. We also extend our proof to cover all the $\ell^p$ norms. Our results constitute a fundamental advance in the study of adversarial examples and encompass a wide variety of architectures, including any combination of convolutional or fully connected layers with skip connections and pooling. We validate our results with experiments on both random deep neural networks and deep neural networks trained on the MNIST and CIFAR10 datasets. Given the results of our experiments on MNIST and CIFAR10, we conjecture that the proof of our adversarial robustness guarantee can be extended to trained deep neural networks. Such an extension would open the way to a thorough theoretical study of neural network robustness by classifying the relation between network architecture and adversarial distance.
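
The $\sqrt{n}$ scaling above is a proved lower bound on the distance to the decision boundary. The sketch below is only an illustrative numerical probe of that qualitative behaviour, under assumptions of my own (a two-hidden-layer ReLU network with Gaussian weights and biases, He-style scaling, a first-order search direction); it upper-bounds the boundary distance along one direction and is neither the paper's proof technique nor its experimental protocol.

    # Illustrative probe (assumed architecture and scalings, not the paper's setup):
    # for a random-weight-and-bias ReLU network classifying by the sign of its scalar
    # output, estimate the l1 length of the smallest step along the descent direction
    # of the output that flips the classification. This upper-bounds the true distance
    # to the boundary; the paper proves that distance grows at least like sqrt(n).
    import numpy as np

    def random_params(n, width, rng):
        return (rng.normal(0, np.sqrt(2.0 / n), (width, n)), rng.normal(size=width),
                rng.normal(0, np.sqrt(2.0 / width), (width, width)), rng.normal(size=width),
                rng.normal(0, np.sqrt(2.0 / width), width))

    def forward(params, x):
        """Scalar output and its gradient with respect to the input (manual backprop)."""
        W1, b1, W2, b2, w3 = params
        h1 = np.maximum(W1 @ x + b1, 0.0)
        h2 = np.maximum(W2 @ h1 + b2, 0.0)
        grad = W1.T @ ((W2.T @ (w3 * (h2 > 0))) * (h1 > 0))
        return float(w3 @ h2), grad

    def l1_flip_length(params, x):
        """l1 norm of the smallest perturbation t*u (u = normalized descent direction)
        that changes sign(output), found by doubling then bisection."""
        out0, grad = forward(params, x)
        if not np.any(grad):
            return np.inf
        u = -np.sign(out0) * grad / np.linalg.norm(grad)
        t_lo, t_hi = 0.0, 1.0
        while np.sign(forward(params, x + t_hi * u)[0]) == np.sign(out0):
            t_hi *= 2.0
            if t_hi > 1e6:
                return np.inf                  # no flip found along this direction
        for _ in range(50):
            t_mid = 0.5 * (t_lo + t_hi)
            if np.sign(forward(params, x + t_mid * u)[0]) == np.sign(out0):
                t_lo = t_mid
            else:
                t_hi = t_mid
        return float(np.sum(np.abs(t_hi * u)))

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        for n in (64, 256, 1024):
            d = [l1_flip_length(random_params(n, 256, rng), rng.normal(size=n))
                 for _ in range(20)]
            print(n, np.mean([v for v in d if np.isfinite(v)]))  # expect growth with n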

Deep neural networks are biased towards simple functions

Dec 25, 2018
Giacomo De Palma, Bobak Toussi Kiani, Seth Lloyd

We prove that the binary classifiers of bit strings generated by random wide deep neural networks are biased towards simple functions. The simplicity is captured by the following two properties. For any given input bit string, the average Hamming distance to the closest input bit string with a different classification is at least $\sqrt{n/(2\pi\ln n)}$, where $n$ is the length of the string. Moreover, if the bits of the initial string are flipped randomly, the average number of flips required to change the classification grows linearly with $n$. By contrast, for a uniformly random binary classifier, the average Hamming distance to the closest input bit string with a different classification is one, and the average number of random flips required to change the classification is two. These results are confirmed by numerical experiments on deep neural networks with two hidden layers, and settle the conjecture, proposed and numerically explored in [Valle Pérez et al., arXiv:1805.08522] to explain the unreasonably good generalization properties of deep learning algorithms, that random deep neural networks are biased towards simple functions. By providing a precise characterization of the form of this bias towards simplicity, our results open the way to a rigorous proof of the generalization properties of deep learning algorithms in real-world scenarios.
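
As a companion to the two properties above, the sketch below numerically estimates the second one: the number of uniformly random bit flips needed to change the classification of a random two-hidden-layer ReLU network on bit strings. The architecture, width, and weight scalings are assumptions of mine rather than the paper's exact setup; the abstract states that for a uniformly random classifier the average is two, independent of $n$, whereas here the count should grow roughly linearly with $n$.

    # Estimate how many uniformly random bit flips (with replacement) are needed to
    # change the classification of a random two-hidden-layer ReLU network on n-bit
    # strings. Architecture and scalings are illustrative assumptions, not the paper's.
    import numpy as np

    def random_classifier(n, width, rng):
        """Random wide network; the class of a bit string is the sign of the output."""
        W1 = rng.normal(0, np.sqrt(2.0 / n), (width, n))
        W2 = rng.normal(0, np.sqrt(2.0 / width), (width, width))
        w3 = rng.normal(0, np.sqrt(2.0 / width), width)
        return lambda x: np.sign(w3 @ np.maximum(W2 @ np.maximum(W1 @ x, 0.0), 0.0))

    def flips_to_change_class(classify, x, rng, max_flips=100000):
        """Flip uniformly random bits of x until the classification changes."""
        c0, x = classify(x), x.copy()
        for k in range(1, max_flips + 1):
            i = rng.integers(len(x))
            x[i] = 1.0 - x[i]                  # bits are stored as 0.0 / 1.0
            if classify(x) != c0:
                return k
        return max_flips

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        for n in (32, 64, 128):
            counts = []
            for _ in range(50):
                clf = random_classifier(n, width=512, rng=rng)
                x = rng.integers(0, 2, size=n).astype(float)
                counts.append(flips_to_change_class(clf, x, rng))
            print(n, np.mean(counts))          # expect roughly linear growth with n
        # The abstract notes that a uniformly random classifier needs two flips on average.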
