Lingxiao Li

Monitoring and Adapting ML Models on Mobile Devices

May 17, 2023
Wei Hao, Zixi Wang, Lauren Hong, Lingxiao Li, Nader Karayanni, Chengzhi Mao, Junfeng Yang, Asaf Cidon

ML models are increasingly being pushed to mobile devices, for low-latency inference and offline operation. However, once the models are deployed, it is hard for ML operators to track their accuracy, which can degrade unpredictably (e.g., due to data drift). We design the first end-to-end system for continuously monitoring and adapting models on mobile devices without requiring feedback from users. Our key observation is that often model degradation is due to a specific root cause, which may affect a large group of devices. Therefore, once the system detects a consistent degradation across a large number of devices, it employs a root cause analysis to determine the origin of the problem and applies a cause-specific adaptation. We evaluate the system on two computer vision datasets, and show it consistently boosts accuracy compared to existing approaches. On a dataset containing photos collected from driving cars, our system improves the accuracy on average by 15%.
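
No reference implementation accompanies this listing, so the following is a minimal sketch of the control flow the abstract describes: aggregate label-free drift signals across the fleet, act only when many devices degrade together, then run root-cause analysis and apply a cause-specific adaptation. All names and thresholds here (drift_score, diagnose, ADAPTERS, the 30% fleet fraction) are hypothetical.

```python
# Hypothetical sketch of the fleet-level monitoring loop from the abstract:
# detect consistent degradation across devices, then run a root-cause
# analysis and apply a cause-specific adaptation.
from collections import Counter

DEGRADATION_THRESHOLD = 0.15   # assumed proxy-accuracy drop per device
FLEET_FRACTION = 0.30          # assumed fraction of devices that must agree

def drift_score(report):
    """Proxy for accuracy loss without user labels, e.g. a drop in average
    prediction confidence relative to the deployment baseline."""
    return report["baseline_confidence"] - report["confidence"]

def diagnose(report):
    """Stand-in root-cause analysis: map a degraded report to a cause."""
    if report["brightness"] < 0.2:
        return "low_light"
    if report["blur"] > 0.5:
        return "motion_blur"
    return "unknown_drift"

ADAPTERS = {
    "low_light": lambda model: model,    # e.g. swap in a low-light variant
    "motion_blur": lambda model: model,  # e.g. fine-tune on deblurred data
    "unknown_drift": lambda model: model,
}

def monitor_fleet(model, reports):
    degraded = [r for r in reports if drift_score(r) > DEGRADATION_THRESHOLD]
    if len(degraded) < FLEET_FRACTION * len(reports):
        return model  # isolated noise, not a consistent fleet-wide problem
    cause, _ = Counter(diagnose(r) for r in degraded).most_common(1)[0]
    return ADAPTERS[cause](model)
```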

Self-Consistent Velocity Matching of Probability Flows

Jan 31, 2023
Lingxiao Li, Samuel Hurault, Justin Solomon

We present a discretization-free scalable framework for solving a large class of mass-conserving partial differential equations (PDEs), including the time-dependent Fokker-Planck equation and the Wasserstein gradient flow. The main observation is that the time-varying velocity field of the PDE solution needs to be self-consistent: it must satisfy a fixed-point equation involving the flow characterized by the same velocity field. By parameterizing the flow as a time-dependent neural network, we propose an end-to-end iterative optimization framework called self-consistent velocity matching to solve this class of PDEs. Compared to existing approaches, our method does not suffer from temporal or spatial discretization, covers a wide range of PDEs, and scales to high dimensions. Experimentally, our method recovers analytical solutions accurately when they are available and achieves comparable or better performance in high dimensions with less training time compared to recent large-scale JKO-based methods that are designed for solving a more restrictive family of PDEs.
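
A hedged sketch of the fixed-point idea, specialized to a Fokker-Planck equation with potential V, whose probability-flow velocity is v*(x, t) = -grad V(x) - grad log rho_t(x): flow particles with the current network, then regress the network onto the target velocity along those trajectories. The kernel-density score estimate is an illustrative stand-in for grad log rho_t; the paper's actual objective avoids explicit density estimation.

```python
import torch

# Schematic fixed-point iteration for self-consistent velocity matching on a
# Fokker-Planck equation: simulate the flow with the (frozen) current velocity
# field, then match the network to the PDE's target velocity at the particles.

def grad_V(x):                            # V(x) = |x|^2 / 2, so grad V(x) = x
    return x

def kde_score(x, bandwidth=0.2):
    """grad log rho estimated from the particle batch via a Gaussian KDE."""
    diff = x[:, None, :] - x[None, :, :]                     # (n, n, d)
    w = torch.softmax(-(diff ** 2).sum(-1) / (2 * bandwidth ** 2), dim=1)
    return -(w[..., None] * diff).sum(1) / bandwidth ** 2

v = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Tanh(),
                        torch.nn.Linear(64, 2))              # (x, t) -> velocity
opt = torch.optim.Adam(v.parameters(), lr=1e-3)

for it in range(1000):
    x, loss, dt = torch.randn(256, 2), 0.0, 0.05             # x ~ rho_0
    for k in range(10):                                      # Euler flow of v
        t = torch.full((x.shape[0], 1), k * dt)
        vel = v(torch.cat([x, t], dim=1))
        target = -grad_V(x) - kde_score(x)                   # v* at particles
        loss = loss + ((vel - target.detach()) ** 2).mean()  # match v to v*
        x = (x + dt * vel).detach()                          # freeze the flow
    opt.zero_grad(); loss.backward(); opt.step()
```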

The Euclidean Space is Evil: Hyperbolic Attribute Editing for Few-shot Image Generation

Nov 22, 2022
Lingxiao Li, Yi Zhang, Shuhui Wang

Few-shot image generation is a challenging task, since it aims to generate diverse new images for an unseen category from only a few examples. Existing methods suffer from a trade-off between the quality and diversity of generated images. To tackle this problem, we propose Hyperbolic Attribute Editing (HAE), a simple yet effective method. Unlike other methods that work in Euclidean space, HAE captures the hierarchy among images using data from seen categories in hyperbolic space. Given a well-trained HAE, images of unseen categories can be generated by moving the latent code of a given image toward any meaningful direction in the Poincaré disk at a fixed radius. Most importantly, the hyperbolic space allows us to control the semantic diversity of the generated images by setting different radii in the disk. Extensive experiments and visualizations demonstrate that HAE is capable of not only generating images with promising quality and diversity using limited data but also achieving a highly controllable and interpretable editing process.
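
The edit itself reduces to simple hyperbolic geometry. Below is a minimal numpy sketch, assuming the latent codes live in a Poincaré ball of curvature -1 and that an attribute direction is given: a code is moved via Möbius addition and then rescaled so its hyperbolic radius, which controls diversity, is unchanged. The latent layout and the notion of "meaningful direction" are assumptions here.

```python
import numpy as np

# Illustrative Poincare-disk edit: move a latent code along a direction with
# Mobius addition, then rescale so its hyperbolic distance from the origin
# (the "radius" that controls semantic diversity) stays fixed.

def mobius_add(x, y):
    """Mobius addition in the Poincare ball (curvature -1)."""
    xy, xx, yy = np.dot(x, y), np.dot(x, x), np.dot(y, y)
    num = (1 + 2 * xy + yy) * x + (1 - xx) * y
    return num / (1 + 2 * xy + xx * yy)

def edit(z, direction, step):
    """Walk z toward `direction`, keeping its hyperbolic radius fixed."""
    radius = np.linalg.norm(z)              # fixed: d(0, z) = 2 artanh(|z|)
    moved = mobius_add(z, step * direction)
    return moved * radius / np.linalg.norm(moved)

z = np.array([0.3, 0.1])                    # latent code of a seed image
direction = np.array([0.0, 1.0])            # assumed attribute direction
edited = edit(z, direction, step=0.2)       # feed to the generator
```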

Sampling with Mollified Interaction Energy Descent

Oct 24, 2022
Lingxiao Li, Qiang Liu, Anna Korba, Mikhail Yurochkin, Justin Solomon

Sampling from a target measure whose density is only known up to a normalization constant is a fundamental problem in computational statistics and machine learning. In this paper, we present a new optimization-based method for sampling called mollified interaction energy descent (MIED). MIED minimizes a new class of energies on probability measures called mollified interaction energies (MIEs). These energies rely on mollifier functions -- smooth approximations of the Dirac delta originating in PDE theory. We show that as the mollifier approaches the Dirac delta, the MIE converges to the chi-square divergence with respect to the target measure and the gradient flow of the MIE agrees with that of the chi-square divergence. Optimizing this energy with proper discretization yields a practical first-order particle-based algorithm for sampling in both unconstrained and constrained domains. We show experimentally that for unconstrained sampling problems our algorithm performs on par with existing particle-based algorithms like SVGD, while for constrained sampling problems our method readily incorporates constrained optimization techniques to handle more flexible constraints with strong performance compared to alternatives.
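
A minimal particle sketch of the descent, assuming a Gaussian mollifier and an unnormalized Gaussian target; the paper's exact energy, kernel choices, and numerical stabilization differ in details.

```python
import torch

# Schematic mollified interaction energy descent (MIED): gradient descent on
# particle positions for a discretized mollified interaction energy. As the
# mollifier width eps -> 0 this energy approaches the chi-square divergence
# (up to constants), so the particles are driven toward the target measure.

def log_p(x):                          # unnormalized target: standard Gaussian
    return -0.5 * (x ** 2).sum(-1)

def mied_energy(x, eps=0.1):
    """Discretized E_eps = (1/n^2) sum_ij phi_eps(x_i - x_j) / sqrt(p_i p_j),
    computed in the log domain for numerical stability."""
    sq = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    log_phi = -sq / (2 * eps ** 2)                       # Gaussian mollifier
    log_terms = log_phi - 0.5 * (log_p(x)[:, None] + log_p(x)[None, :])
    return torch.logsumexp(log_terms.flatten(), dim=0)   # monotone in E_eps

x = torch.randn(512, 2, requires_grad=True)              # particles
opt = torch.optim.Adam([x], lr=1e-2)
for it in range(2000):
    opt.zero_grad()
    mied_energy(x).backward()
    opt.step()                                           # particles ~ target
```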

Wasserstein Iterative Networks for Barycenter Estimation

Jan 28, 2022
Alexander Korotin, Vage Egiazarian, Lingxiao Li, Evgeny Burnaev

Wasserstein barycenters have become popular due to their ability to represent the average of probability measures in a geometrically meaningful way. In this paper, we present an algorithm to approximate the Wasserstein-2 barycenters of continuous measures via a generative model. Previous approaches rely on regularization (entropic/quadratic), which introduces bias, or on input convex neural networks, which are not expressive enough for large-scale tasks. In contrast, our algorithm does not introduce bias and allows using arbitrary neural networks. In addition, based on the celebrity faces dataset, we construct the Ave, celeba! dataset, which can be used for quantitative evaluation of barycenter algorithms using standard generative-model metrics such as FID.
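
A heavily hedged sketch of one way such an iterative scheme can be organized: regress a generator onto the weighted average of OT maps from its current output measure to the inputs. fit_ot_map is a hypothetical placeholder (any Wasserstein-2 solver could back it), and the paper's actual procedure differs in its details.

```python
import torch

# Schematic barycenter fixed-point iteration with a generative model:
# fit OT maps T_k from the generator's current distribution to each input
# measure, then regress the generator onto sum_k w_k T_k(G(z)).

def fit_ot_map(source_sampler, target_sampler):
    """Return a map T with T#source ~ target (hypothetical OT subroutine)."""
    raise NotImplementedError

def barycenter_iteration(G, latent_dim, input_samplers, weights, opt,
                         steps=1000):
    sample_G = lambda n: G(torch.randn(n, latent_dim)).detach()
    maps = [fit_ot_map(sample_G, s) for s in input_samplers]
    for _ in range(steps):
        y = G(torch.randn(256, latent_dim))
        target = sum(w * T(y).detach() for w, T in zip(weights, maps))
        loss = ((y - target) ** 2).mean()    # pull G toward the averaged maps
        opt.zero_grad(); loss.backward(); opt.step()
    return G
```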

Learning Proximal Operators to Discover Multiple Optima

Jan 28, 2022
Lingxiao Li, Noam Aigerman, Vladimir G. Kim, Jiajin Li, Kristjan Greenewald, Mikhail Yurochkin, Justin Solomon

Finding multiple solutions of non-convex optimization problems is a ubiquitous yet challenging task. Typical existing solutions either apply single-solution optimization methods from multiple random initial guesses or search in the vicinity of found solutions using ad hoc heuristics. We present an end-to-end method to learn the proximal operator across a family of non-convex problems, which can then be used to recover multiple solutions for unseen problems at test time. Our method only requires access to the objectives without needing the supervision of ground truth solutions. Notably, the added proximal regularization term elevates the convexity of our formulation: by applying recent theoretical results, we show that for weakly-convex objectives and under mild regularity conditions, training of the proximal operator converges globally in the over-parameterized setting. We further present a benchmark for multi-solution optimization including a wide range of applications and evaluate our method to demonstrate its effectiveness.
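
The training objective is easy to state concretely: the network f(x, tau) is penalized by the instance objective F_tau at its output plus a proximal term pulling the output toward the query point, so it learns the proximal operator without ever seeing ground-truth solutions. Below is a minimal sketch on an illustrative one-dimensional double-well family; the problem family and architecture are assumptions.

```python
import torch

# Learn f(x, tau) ~ prox_{F_tau / lam}(x) across a family of non-convex
# instances indexed by tau, with no ground-truth supervision.

lam = 1.0

def F(y, tau):                                   # non-convex instance objective
    return ((y ** 2 - 1.0) ** 2 + tau * y).sum(-1)

f = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.ReLU(),
                        torch.nn.Linear(64, 1))  # input (x, tau) -> prox point
opt = torch.optim.Adam(f.parameters(), lr=1e-3)

for it in range(5000):                           # train over random instances
    x = 3 * torch.randn(256, 1)
    tau = torch.rand(256, 1) - 0.5
    y = f(torch.cat([x, tau], dim=1))
    loss = (F(y, tau) + 0.5 * lam * ((y - x) ** 2).sum(-1)).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Test time: iterate the learned operator from many random starts; distinct
# fixed points recover multiple local optima of an unseen instance tau.
tau = torch.full((64, 1), 0.1)
x = 3 * torch.randn(64, 1)
for _ in range(20):
    x = f(torch.cat([x, tau], dim=1)).detach()
```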

Do Neural Optimal Transport Solvers Work? A Continuous Wasserstein-2 Benchmark

Jun 03, 2021
Alexander Korotin, Lingxiao Li, Aude Genevay, Justin Solomon, Alexander Filippov, Evgeny Burnaev

Despite the recent popularity of neural network-based solvers for optimal transport (OT), there is no standard quantitative way to evaluate their performance. In this paper, we address this issue for quadratic-cost transport -- specifically, computation of the Wasserstein-2 distance, a commonly-used formulation of optimal transport in machine learning. To overcome the challenge of computing ground truth transport maps between continuous measures needed to assess these solvers, we use input-convex neural networks (ICNNs) to construct pairs of measures whose ground truth OT maps can be obtained analytically. This strategy yields pairs of continuous benchmark measures in high-dimensional spaces such as spaces of images. We thoroughly evaluate existing optimal transport solvers using these benchmark measures. Even though these solvers perform well in downstream tasks, many do not faithfully recover optimal transport maps. To investigate the cause of this discrepancy, we further test the solvers in a setting of image generation. Our study reveals crucial limitations of existing solvers and shows that increased OT accuracy does not necessarily correlate with better downstream results.
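
The construction rests on Brenier's theorem: the gradient of a convex potential is the optimal Wasserstein-2 map between a measure and its pushforward. A minimal sketch of this idea, with a deliberately tiny ICNN (the benchmark itself uses larger architectures):

```python
import torch

# Build a benchmark pair: push a source measure through grad(psi) of an
# input-convex network, so the ground-truth OT map is known analytically.

class ICNN(torch.nn.Module):
    def __init__(self, d, h=64):
        super().__init__()
        self.Wx0 = torch.nn.Linear(d, h)
        self.Wx1 = torch.nn.Linear(d, h)
        self.Wz1 = torch.nn.Linear(h, h, bias=False)   # weights kept >= 0
        self.out = torch.nn.Linear(h, 1, bias=False)   # weights kept >= 0
    def forward(self, x):
        z = torch.relu(self.Wx0(x))
        z = torch.relu(self.Wz1(z) + self.Wx1(x))      # convex in x
        return self.out(z) + 0.5 * (x ** 2).sum(-1, keepdim=True)

def clamp_nonneg(icnn):                                # enforce convexity
    with torch.no_grad():
        icnn.Wz1.weight.clamp_(min=0)
        icnn.out.weight.clamp_(min=0)

def ot_map(icnn, x):
    """Ground-truth W2-optimal map: T(x) = grad psi(x) (Brenier)."""
    x = x.clone().requires_grad_(True)
    (g,) = torch.autograd.grad(icnn(x).sum(), x)
    return g

psi = ICNN(d=2); clamp_nonneg(psi)
x = torch.randn(1024, 2)                               # source samples
y = ot_map(psi, x)                                     # target = grad(psi)#source
```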

Large-Scale Wasserstein Gradient Flows

Jun 01, 2021
Petr Mokrov, Alexander Korotin, Lingxiao Li, Aude Genevay, Justin Solomon, Evgeny Burnaev

Wasserstein gradient flows provide a powerful means of understanding and solving many diffusion equations. Specifically, Fokker-Planck equations, which model the diffusion of probability measures, can be understood as gradient descent over entropy functionals in Wasserstein space. This equivalence, introduced by Jordan, Kinderlehrer and Otto, inspired the so-called JKO scheme to approximate these diffusion processes via an implicit discretization of the gradient flow in Wasserstein space. Solving the optimization problem associated to each JKO step, however, presents serious computational challenges. We introduce a scalable method to approximate Wasserstein gradient flows, targeted to machine learning applications. Our approach relies on input-convex neural networks (ICNNs) to discretize the JKO steps, which can be optimized by stochastic gradient descent. Unlike previous work, our method does not require domain discretization or particle simulation. As a result, we can sample from the measure at each time step of the diffusion and compute its probability density. We demonstrate our algorithm's performance by computing diffusions following the Fokker-Planck equation and apply it to unnormalized density sampling as well as nonlinear filtering.
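
A sketch of a single JKO step under these choices: over a convex potential psi, minimize E[V(grad psi(x))] minus the entropy change (the log-determinant of the Hessian of psi, by change of variables) plus the proximal Wasserstein term. The one-hidden-layer convex potential and exact per-sample Hessians below are for illustration only; in practice, sampling from rho_k composes the previously learned maps.

```python
import torch

# One schematic JKO step: over convex psi, minimize
#   E[V(grad psi(x))] - E[log det Hess psi(x)] + (1/2h) E||grad psi(x) - x||^2
# where x ~ rho_k and grad psi pushes rho_k to rho_{k+1}.

h, d = 0.1, 2

def V(y):                                 # drift potential of the diffusion
    return 0.5 * (y ** 2).sum(-1)

W0 = torch.nn.Linear(d, 32)
Wz = torch.nn.Linear(32, 1, bias=False)   # nonneg weights => psi convex
opt = torch.optim.Adam(list(W0.parameters()) + list(Wz.parameters()), lr=1e-3)

def psi(x):                               # convex: softplus is convex, Wz >= 0
    return Wz(torch.nn.functional.softplus(W0(x))) \
        + 0.5 * (x ** 2).sum(-1, keepdim=True)

def grad_psi(x):
    x = x.requires_grad_(True)
    (g,) = torch.autograd.grad(psi(x).sum(), x, create_graph=True)
    return g

for it in range(500):
    with torch.no_grad():
        Wz.weight.clamp_(min=0)           # keep the potential convex
    x = torch.randn(128, d)               # stand-in samples from rho_k
    y = grad_psi(x)
    J = torch.stack([torch.autograd.grad(y[:, i].sum(), x, create_graph=True)[0]
                     for i in range(d)], dim=1)        # per-sample Hess psi
    loss = (V(y).mean() - torch.logdet(J).mean()
            + ((y - x) ** 2).sum(-1).mean() / (2 * h))
    opt.zero_grad(); loss.backward(); opt.step()
```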

Continuous Wasserstein-2 Barycenter Estimation without Minimax Optimization

Feb 02, 2021
Alexander Korotin, Lingxiao Li, Justin Solomon, Evgeny Burnaev

Wasserstein barycenters provide a geometric notion of the weighted average of probability measures based on optimal transport. In this paper, we present a scalable algorithm to compute Wasserstein-2 barycenters given sample access to the input measures, which are not restricted to being discrete. While past approaches rely on entropic or quadratic regularization, we employ input convex neural networks and cycle-consistency regularization to avoid introducing bias. As a result, our approach does not resort to minimax optimization. We provide theoretical analysis of error bounds as well as empirical evidence of the effectiveness of the proposed approach in low-dimensional qualitative scenarios and high-dimensional quantitative experiments.
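
The abstract alone does not pin down the exact losses, so the following is only a schematic of the two ingredients it names: a cycle-consistency penalty tying forward and inverse maps, and the barycenter fixed-point identity sum_k w_k g_k(y) = y, optimized jointly with no adversarial inner loop. Plain MLPs stand in for the paper's convex (ICNN) potentials.

```python
import torch

# Schematic minimax-free barycenter training. For each input measure mu_k,
# f_k maps mu_k to the barycenter and g_k maps back; training combines a
# cycle-consistency penalty with the barycenter fixed-point identity.

d, K = 2, 3
w = torch.tensor([1 / 3, 1 / 3, 1 / 3])
samplers = [lambda n, c=c: torch.randn(n, d) + c          # toy input measures
            for c in (torch.tensor([2., 0.]), torch.tensor([-2., 0.]),
                      torch.tensor([0., 2.]))]

def mlp():
    return torch.nn.Sequential(torch.nn.Linear(d, 64), torch.nn.ReLU(),
                               torch.nn.Linear(64, d))

f = [mlp() for _ in range(K)]               # mu_k -> barycenter
g = [mlp() for _ in range(K)]               # barycenter -> mu_k
opt = torch.optim.Adam([p for net in f + g for p in net.parameters()], lr=1e-3)

for it in range(3000):
    loss = 0.0
    for k in range(K):
        x = samplers[k](256)
        y = f[k](x)                          # push mu_k to the barycenter
        loss = loss + ((g[k](y) - x) ** 2).mean()           # cycle penalty
        fixed = sum(w[j] * g[j](y) for j in range(K))
        loss = loss + ((fixed - y) ** 2).mean()             # fixed-point identity
    opt.zero_grad(); loss.backward(); opt.step()
```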
