Philippe Rigollet

Covariance alignment: from maximum likelihood estimation to Gromov-Wasserstein

Nov 22, 2023
Yanjun Han, Philippe Rigollet, George Stepaniants

Feature alignment methods are used in many scientific disciplines for data pooling, annotation, and comparison. As an instance of a permutation learning problem, feature alignment presents significant statistical and computational challenges. In this work, we propose the covariance alignment model to study and compare various alignment methods, and we establish a minimax lower bound for covariance alignment that has a non-standard dimension scaling because of the presence of a nuisance parameter. This lower bound is in fact minimax optimal and is achieved by a natural quasi-MLE. However, this estimator involves a search over all permutations, which is computationally infeasible even when the problem has moderate size. To overcome this limitation, we show that the celebrated Gromov-Wasserstein algorithm from optimal transport, which is more amenable to fast implementation even on large-scale problems, is also minimax optimal. These results give the first statistical justification for the deployment of the Gromov-Wasserstein algorithm in practice.

* 41 pages, 2 figures 
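
For readers who want to try the Gromov-Wasserstein approach, here is a minimal sketch using the POT library (`pip install pot`); the data-generating model, dimensions, and the way a permutation is read off the coupling are our illustrative assumptions, not the paper's experimental setup.

```python
# Hedged sketch: align the features of two datasets by matching their
# empirical covariance matrices with POT's Gromov-Wasserstein solver.
import numpy as np
import ot  # Python Optimal Transport

rng = np.random.default_rng(0)
d, n = 10, 500

# A common covariance structure for both datasets.
A = rng.normal(size=(d, d))
Sigma = A @ A.T + d * np.eye(d)
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)

# Second dataset: same covariance, columns shuffled by a hidden permutation.
perm = rng.permutation(d)
Y = rng.multivariate_normal(np.zeros(d), Sigma, size=n)[:, perm]

C1 = np.cov(X, rowvar=False)  # the two d x d covariance matrices play
C2 = np.cov(Y, rowvar=False)  # the role of the "metric" matrices in GW

p = np.full(d, 1 / d)         # uniform weights over features
T = ot.gromov.gromov_wasserstein(C1, C2, p, p, 'square_loss')

# Read a matching off the coupling; recovery is only approximate since
# the covariances are estimated and the GW problem is non-convex.
perm_hat = T.argmax(axis=1)
print("fraction recovered:", np.mean(perm_hat == np.argsort(perm)))
```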

Optimal transport for automatic alignment of untargeted metabolomic data

Jun 05, 2023
Marie Breeur, George Stepaniants, Pekka Keski-Rahkonen, Philippe Rigollet, Vivian Viallon


Untargeted metabolomic profiling through liquid chromatography-mass spectrometry (LC-MS) measures a vast array of metabolites within biospecimens, advancing drug development, disease diagnosis, and risk prediction. However, the low throughput of LC-MS poses a major challenge for biomarker discovery, annotation, and experimental comparison, necessitating the merging of multiple datasets. Current data pooling methods encounter practical limitations due to their vulnerability to data variations and hyperparameter dependence. Here we introduce GromovMatcher, a flexible and user-friendly algorithm that automatically combines LC-MS datasets using optimal transport. By capitalizing on feature intensity correlation structures, GromovMatcher delivers superior alignment accuracy and robustness compared to existing approaches, and it scales to thousands of features while requiring minimal hyperparameter tuning. Applying our method to experimental patient studies of liver and pancreatic cancer, we discover shared metabolic features related to patient alcohol intake, demonstrating how GromovMatcher facilitates the search for biomarkers associated with lifestyle risk factors linked to several cancer types.

* 41 pages, 11 figures 
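
The core matching step can be sketched with the POT library as below; this shows only the correlation-structure matching at the heart of the method, while the full GromovMatcher pipeline adds preprocessing and filtering steps omitted here, and the function name and defaults are ours.

```python
# Hedged sketch of the core idea (not the released GromovMatcher code):
# match features across two LC-MS studies through the correlation
# structure of their intensities, using entropic GW from POT.
import numpy as np
import ot

def match_features(X1, X2, eps=5e-3, thresh=None):
    """X1: (n1 samples, d1 features), X2: (n2, d2); returns a soft matching."""
    C1 = np.corrcoef(X1, rowvar=False)   # d1 x d1 intensity correlations
    C2 = np.corrcoef(X2, rowvar=False)   # d2 x d2
    p = np.full(C1.shape[0], 1 / C1.shape[0])
    q = np.full(C2.shape[0], 1 / C2.shape[0])
    T = ot.gromov.entropic_gromov_wasserstein(
        C1, C2, p, q, 'square_loss', epsilon=eps)
    if thresh is not None:
        T = np.where(T > thresh, T, 0.0)  # keep confident pairs only
    return T
```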

The emergence of clusters in self-attention dynamics

May 17, 2023
Borjan Geshkovski, Cyril Letrouit, Yury Polyanskiy, Philippe Rigollet


Viewing Transformers as interacting particle systems, we describe the geometry of learned representations when the weights are not time dependent. We show that particles, representing tokens, tend to cluster toward particular limiting objects as time tends to infinity. Cluster locations are determined by the initial tokens, confirming context-awareness of representations learned by Transformers. Using techniques from dynamical systems and partial differential equations, we show that the type of limiting object that emerges depends on the spectrum of the value matrix. Additionally, in the one-dimensional case we prove that the self-attention matrix converges to a low-rank Boolean matrix. The combination of these results mathematically confirms the empirical observation made by Vaswani et al. [VSP'17] that leaders appear in a sequence of tokens when processed by Transformers.

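The clustering phenomenon is easy to reproduce numerically. Below is a toy Euler discretization of the interacting-particle view of self-attention; setting $Q = K = V = I$ and renormalizing tokens to the unit sphere are simplifying assumptions made for illustration, not the paper's general setting.

```python
# Toy simulation: n tokens evolve by attention-weighted averaging and
# collapse into clusters. Q = K = V = I and the per-step renormalization
# (a stand-in for layer norm) are simplifying assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, d, h, steps = 32, 3, 0.05, 4000
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # tokens on the sphere

for _ in range(steps):
    A = X @ X.T                                # <Q x_i, K x_j>, Q = K = I
    W = np.exp(A - A.max(axis=1, keepdims=True))
    W /= W.sum(axis=1, keepdims=True)          # row-softmax attention
    X += h * (W @ X)                           # dx_i/dt = sum_j W_ij V x_j
    X /= np.linalg.norm(X, axis=1, keepdims=True)

# Tokens now occupy only a few distinct limit points.
clusters = np.unique(np.round(X, 3), axis=0)
print(f"{len(clusters)} cluster(s) among {n} tokens")
```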

Learning Gaussian Mixtures Using the Wasserstein-Fisher-Rao Gradient Flow

Jan 04, 2023
Yuling Yan, Kaizheng Wang, Philippe Rigollet


Gaussian mixture models form a flexible and expressive parametric family of distributions that has found applications in a wide variety of fields. Unfortunately, fitting these models to data is a notoriously hard problem from a computational perspective. Currently, only moment-based methods enjoy theoretical guarantees, while likelihood-based methods are dominated by heuristics such as Expectation-Maximization that are known to fail in simple examples. In this work, we propose a new algorithm to compute the nonparametric maximum likelihood estimator (NPMLE) in a Gaussian mixture model. Our method is based on gradient descent over the space of probability measures equipped with the Wasserstein-Fisher-Rao geometry, for which we establish convergence guarantees. In practice, it can be approximated using an interacting particle system where the weights and locations of particles are updated alternately. We conduct extensive numerical experiments to confirm the effectiveness of the proposed algorithm, comparing it not only to classical benchmarks but also to similar gradient descent algorithms with respect to simpler geometries. In particular, these simulations illustrate the benefit of updating both the weights and locations of the interacting particles.

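To make the alternating particle scheme concrete, here is a rough one-dimensional sketch with unit component variance; the step sizes, initialization, and exact discretization are our assumptions rather than the paper's tuned algorithm.

```python
# Rough sketch (our discretization, not the paper's tuned algorithm) of
# the alternating particle scheme for the NPMLE of a 1-d Gaussian
# mixture with unit component variance.
import numpy as np

def npmle_wfr(y, m=50, steps=500, h_loc=0.1, h_wgt=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.choice(y, size=m)                 # atom locations
    w = np.full(m, 1.0 / m)                   # atom weights

    for _ in range(steps):
        # K[j, i] = phi(y_j - x_i), the Gaussian kernel matrix.
        K = np.exp(-0.5 * (y[:, None] - x[None, :]) ** 2) / np.sqrt(2 * np.pi)
        dens = K @ w                          # mixture density at data points

        # Wasserstein step: move atoms up the likelihood gradient.
        grad = (K * (y[:, None] - x[None, :])).T @ (1.0 / dens) / len(y)
        x = x + h_loc * grad

        # Fisher-Rao step: multiplicative reweighting of the atoms.
        fv = K.T @ (1.0 / dens) / len(y)      # first variation at each atom
        w = w * np.exp(h_wgt * (fv - w @ fv))
        w /= w.sum()
    return x, w
```

Running this on data drawn from a well-separated two-component mixture concentrates the weight on atoms near the true centers, which illustrates why updating both weights and locations helps.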

GULP: a prediction-based metric between representations

Oct 12, 2022
Enric Boix-Adsera, Hannah Lawrence, George Stepaniants, Philippe Rigollet


Comparing the representations learned by different neural networks has recently emerged as a key tool to understand various architectures and ultimately optimize them. In this work, we introduce GULP, a family of distance measures between representations that is explicitly motivated by downstream predictive tasks. By construction, GULP provides uniform control over the difference in prediction performance between two representations, with respect to regularized linear prediction tasks. Moreover, it satisfies several desirable structural properties, such as the triangle inequality and invariance under orthogonal transformations, and thus lends itself to data embedding and visualization. We extensively evaluate GULP relative to other methods, and demonstrate that it correctly differentiates between architecture families, converges over the course of training, and captures generalization performance on downstream linear tasks.

* 34 pages, 24 figures, to appear in NeurIPS'22 
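
To convey the flavor of a prediction-based distance, here is a hedged reconstruction of a plug-in estimator built from ridge "hat" matrices; the exact GULP estimator is specified in the paper and its released code, so treat the formula below as our paraphrase rather than the reference implementation.

```python
# Hedged reconstruction of a GULP-style plug-in distance between two
# representations of the same n inputs; consult the authors' released
# code for the exact estimator. X: (n, dX), Y: (n, dY).
import numpy as np

def gulp_distance(X, Y, lam=1e-2):
    n = X.shape[0]
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Sxx = X.T @ X / n
    Syy = Y.T @ Y / n
    Sxy = X.T @ Y / n
    # Ridge "hat" matrices (Sxx + lam I)^{-1} Sxx, etc.
    Rx = np.linalg.solve(Sxx + lam * np.eye(Sxx.shape[0]), Sxx)
    Ry = np.linalg.solve(Syy + lam * np.eye(Syy.shape[0]), Syy)
    Wx = np.linalg.solve(Sxx + lam * np.eye(Sxx.shape[0]), Sxy)
    Wy = np.linalg.solve(Syy + lam * np.eye(Syy.shape[0]), Sxy.T)
    d2 = np.trace(Rx @ Rx) + np.trace(Ry @ Ry) - 2 * np.trace(Wx @ Wy)
    return np.sqrt(max(d2, 0.0))
```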

Variational inference via Wasserstein gradient flows

May 31, 2022
Marc Lambert, Sinho Chewi, Francis Bach, Silvère Bonnabel, Philippe Rigollet


Along with Markov chain Monte Carlo (MCMC) methods, variational inference (VI) has emerged as a central computational approach to large-scale Bayesian inference. Rather than sampling from the true posterior $\pi$, VI aims at producing a simple but effective approximation $\hat \pi$ to $\pi$ for which summary statistics are easy to compute. However, unlike the well-studied MCMC methodology, VI is still poorly understood and dominated by heuristics. In this work, we propose principled methods for VI, in which $\hat \pi$ is taken to be a Gaussian or a mixture of Gaussians, which rest upon the theory of gradient flows on the Bures-Wasserstein space of Gaussian measures. Akin to MCMC, our approach comes with strong theoretical guarantees when $\pi$ is log-concave.

* 52 pages, 15 figures 
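
Under our reading of the Bures-Wasserstein gradient flow, a sample-based Gaussian VI loop looks roughly as follows; the step size, sample count, and discretization are assumptions for illustration, not the paper's algorithm verbatim.

```python
# Sketch: Gaussian VI by Bures-Wasserstein gradient descent on KL(. || pi),
# with expectations estimated by sampling from the current Gaussian.
import numpy as np

def bw_vi(grad_logpi, hess_logpi, d, steps=200, h=0.05, nsamp=64, seed=0):
    rng = np.random.default_rng(seed)
    m, S = np.zeros(d), np.eye(d)              # current Gaussian N(m, S)
    for _ in range(steps):
        X = rng.multivariate_normal(m, S, size=nsamp)
        # Mean update follows the averaged score of the target.
        m = m + h * np.mean([grad_logpi(x) for x in X], axis=0)
        # Covariance update: push S through the map I + h*M. Note the
        # fixed point: for Gaussian pi, M = 0 exactly when S matches
        # the target covariance (hess_logpi = -S^{-1}).
        M = np.mean([hess_logpi(x) for x in X], axis=0) + np.linalg.inv(S)
        A = np.eye(d) + h * M
        S = A @ S @ A
    return m, S
```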

An algorithmic solution to the Blotto game using multi-marginal couplings

Feb 15, 2022
Vianney Perchet, Philippe Rigollet, Thibaut Le Gouic


We describe an efficient algorithm to compute solutions for the general two-player Blotto game on $n$ battlefields with heterogeneous values. While explicit constructions for such solutions have been limited to specific, largely symmetric or homogeneous, setups, this algorithmic resolution covers the most general situation to date: value-asymmetric games with asymmetric budgets. The proposed algorithm rests on recent theoretical advances regarding Sinkhorn iterations for matrix and tensor scaling. An important case which had been out of reach of previous attempts is that of heterogeneous but symmetric battlefield values with asymmetric budgets. In this case, the Blotto game is constant-sum, so optimal solutions exist, and our algorithm samples from an $\varepsilon$-optimal solution in time $O(n^2 + \varepsilon^{-4})$, independently of budgets and battlefield values. In the case of asymmetric values, where optimal solutions need not exist but Nash equilibria do, our algorithm samples from an $\varepsilon$-Nash equilibrium with similar complexity, but where the implicit constants depend on various parameters of the game such as battlefield values.

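The computational primitive underlying these couplings is Sinkhorn scaling; a minimal two-marginal version is sketched below (the actual Blotto solver works with multi-marginal tensors and game-specific marginals, which this sketch does not attempt).

```python
# Basic two-marginal Sinkhorn iteration on a cost matrix C: the scaling
# primitive the algorithm builds on, not the full Blotto solver.
import numpy as np

def sinkhorn(C, p, q, eps=0.1, iters=500):
    K = np.exp(-C / eps)                 # Gibbs kernel
    u, v = np.ones_like(p), np.ones_like(q)
    for _ in range(iters):
        u = p / (K @ v)                  # match row marginals
        v = q / (K.T @ u)                # match column marginals
    return u[:, None] * K * v[None, :]   # coupling with marginals (p, q)
```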

Gaussian Determinantal Processes: a new model for directionality in data

Nov 19, 2021
Subhro Ghosh, Philippe Rigollet


Determinantal point processes (DPPs) have recently become popular tools for modeling the phenomenon of negative dependence, or repulsion, in data. However, our understanding of an analogue of classical parametric statistical theory is rather limited for this class of models. In this work, we investigate a parametric family of Gaussian DPPs with a clearly interpretable effect of parametric modulation on the observed points. We show that parameter modulation impacts the observed points by introducing directionality in their repulsion structure, and the principal directions correspond to the directions of maximal (i.e., longest-ranged) dependence. This model readily yields a novel and viable alternative to Principal Component Analysis (PCA) as a dimension-reduction tool that favors directions along which the data is most spread out. This methodological contribution is complemented by a statistical analysis of a spiked model, similar to that employed for covariance matrices as a framework to study PCA. These theoretical investigations unveil intriguing questions for further examination in random matrix theory, stochastic geometry, and related topics.

* Proceedings of the National Academy of Sciences 117, no. 24 (2020): 13207-13213 (Direct Submission) 
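
For intuition, one natural parametrization of such a family (an illustrative assumption on our part; the paper's exact normalization may differ) is an anisotropic Gaussian kernel:

```latex
% Illustrative anisotropic Gaussian kernel for a DPP on R^d.
K_\Sigma(x, y) = \rho \, \exp\!\left( -\tfrac{1}{2}\,(x - y)^\top \Sigma^{-1} (x - y) \right),
\qquad x, y \in \mathbb{R}^d .
```

Repulsion then persists over the longest range along the leading eigenvectors of $\Sigma$, which is what makes those directions a natural analogue of principal components.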

Multi-Reference Alignment for sparse signals, Uniform Uncertainty Principles and the Beltway Problem

Jun 24, 2021
Subhro Ghosh, Philippe Rigollet

Motivated by cutting-edge applications like cryo-electron microscopy (cryo-EM), the Multi-Reference Alignment (MRA) model entails the learning of an unknown signal from repeated measurements of its images under the latent action of a group of isometries and additive noise of magnitude $\sigma$. Despite significant interest, a clear picture for understanding rates of estimation in this model has emerged only recently, particularly in the high-noise regime $\sigma \gg 1$ that is highly relevant in applications. Recent investigations have revealed a remarkable asymptotic sample complexity of order $\sigma^6$ for certain signals whose Fourier transforms have full support, in stark contrast to the traditional $\sigma^2$ that arises in regular models. These sample complexities, often prohibitively large in practice, have prompted the investigation of variations around the MRA model where better sample complexity may be achieved. In this paper, we show that \emph{sparse} signals exhibit an intermediate $\sigma^4$ sample complexity even in the classical MRA model. Our results explore and exploit connections of the MRA estimation problem with two classical topics in applied mathematics: the \textit{beltway problem} from combinatorial optimization and \textit{uniform uncertainty principles} from harmonic analysis.

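For reference, the standard cyclic instance of the model (the paper treats general groups of isometries) observes noisy, randomly shifted copies of the signal:

```latex
% Cyclic multi-reference alignment: R_s shifts coordinates cyclically by s.
y_j = R_{s_j}\,\theta + \sigma\,\xi_j ,
\qquad s_j \sim \mathrm{Unif}(\mathbb{Z}/L\mathbb{Z}), \quad
\xi_j \sim \mathcal{N}(0, I_L), \quad j = 1, \dots, n .
```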