Alexander Korotin

Building the Bridge of Schrödinger: A Continuous Entropic Optimal Transport Benchmark

Jun 16, 2023
Nikita Gushchin, Alexander Kolesov, Petr Mokrov, Polina Karpikova, Andrey Spiridonov, Evgeny Burnaev, Alexander Korotin

Over the last several years, there has been significant progress in developing neural solvers for the Schr\"odinger Bridge (SB) problem and applying them to generative modeling. This new research field is justifiably fruitful, as it is interconnected with the practically well-performing diffusion models and the theoretically grounded entropic optimal transport (EOT). Still, the area lacks non-trivial tests that allow a researcher to understand how well the methods solve SB or its equivalent continuous EOT problem. We fill this gap and propose a novel way to create pairs of probability distributions for which the ground-truth OT solution is known by construction. Our methodology is generic and works for a wide range of OT formulations; in particular, it covers the EOT problem, which is equivalent to SB (the main interest of our study). This development allows us to create continuous benchmark distributions with known EOT and SB solutions on high-dimensional spaces such as spaces of images. As an illustration, we use these benchmark pairs to test how well existing neural EOT/SB solvers actually compute the EOT solution. The benchmark is available at: https://github.com/ngushchin/EntropicOTBenchmark.
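
For context, the continuous EOT problem that the benchmark targets is the standard entropy-regularized formulation (a textbook statement, not a detail from the paper): with regularization coefficient $\varepsilon > 0$,

$$\min_{\pi \in \Pi(\mu, \nu)} \int c(x, y) \, d\pi(x, y) + \varepsilon \, \text{KL}(\pi \,\|\, \mu \times \nu),$$

where $\Pi(\mu, \nu)$ is the set of transport plans between the benchmark pair $(\mu, \nu)$; as $\varepsilon \to 0$, the problem recovers unregularized OT.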

Energy-guided Entropic Neural Optimal Transport

Apr 12, 2023
Petr Mokrov, Alexander Korotin, Evgeny Burnaev

Energy-Based Models (EBMs) have been known in the Machine Learning community for decades. Since the seminal works on EBMs dating back to the noughties, many efficient methods have appeared that solve the generative modelling problem by means of energy potentials (unnormalized likelihood functions). In contrast, the realm of Optimal Transport (OT), and neural OT solvers in particular, is much less explored and limited to a few recent works (excluding WGAN-based approaches, which use OT as a loss function and do not model OT maps themselves). In our work, we bridge the gap between EBMs and entropy-regularized OT. We present a novel methodology that leverages the recent developments and technical improvements of the former to enrich the latter. We validate the applicability of our method on toy 2D scenarios as well as standard unpaired image-to-image translation problems. For the sake of simplicity, we choose simple short- and long-run EBMs as the backbone of our Energy-guided Entropic OT method, leaving the application of more sophisticated EBMs for future research.
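
Since the abstract names short- and long-run EBMs as the backbone, below is a minimal sketch of short-run Langevin sampling from a learned energy potential; the name energy_net and all hyperparameters are illustrative assumptions, not the paper's settings.

import torch

def short_run_langevin(energy_net, x_init, n_steps=100, step_size=1e-2):
    # Draw approximate samples from the unnormalized density exp(-E(x))
    # by noisy gradient descent on the energy (Langevin dynamics).
    x = x_init.clone().requires_grad_(True)
    for _ in range(n_steps):
        energy = energy_net(x).sum()
        grad, = torch.autograd.grad(energy, x)
        # Langevin update: x <- x - (step/2) * dE/dx + sqrt(step) * noise
        x = x - 0.5 * step_size * grad + step_size ** 0.5 * torch.randn_like(x)
        x = x.detach().requires_grad_(True)
    return x.detach()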

Partial Neural Optimal Transport

Mar 14, 2023
Milena Gazdieva, Alexander Korotin, Evgeny Burnaev

We propose a novel neural method to compute partial optimal transport (OT) maps, i.e., OT maps between parts of measures of specified masses. We test our partial neural optimal transport algorithm on synthetic examples.
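
For reference, the standard partial OT formulation (not quoted from the paper) transports only a prescribed mass $w \in [0, 1]$:

$$\min_{\substack{\gamma \ge 0,\ \gamma_x \le \mu,\ \gamma_y \le \nu \\ \gamma(\mathcal{X} \times \mathcal{Y}) = w}} \int c(x, y) \, d\gamma(x, y),$$

where $\gamma_x$ and $\gamma_y$ denote the marginals of the plan $\gamma$.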

Neural Gromov-Wasserstein Optimal Transport

Mar 10, 2023
Maksim Nekrashevich, Alexander Korotin, Evgeny Burnaev

We present a scalable neural method to solve the Gromov-Wasserstein (GW) Optimal Transport (OT) problem with the inner product cost. In this problem, given two distributions supported on (possibly different) spaces, one has to find the most isometric map between them. Our approach uses neural networks and stochastic mini-batch optimization, which allows us to overcome the limitations of existing GW methods, such as their poor scalability with the number of samples and the lack of out-of-sample estimation. To demonstrate the effectiveness of our method, we conduct experiments on synthetic data and explore its practical applicability on the popular task of unsupervised word embedding alignment.
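
For reference, the GW problem with the inner-product cost can be written in the standard form (a textbook formulation, not copied from the paper):

$$\min_{\pi \in \Pi(\mu, \nu)} \iint \big( \langle x, x' \rangle - \langle y, y' \rangle \big)^2 \, d\pi(x, y) \, d\pi(x', y'),$$

which penalizes plans that distort pairwise inner products and thus favors the most isometric correspondence.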

Extremal Domain Translation with Neural Optimal Transport

Jan 30, 2023
Milena Gazdieva, Alexander Korotin, Daniil Selikhanovych, Evgeny Burnaev

We propose extremal transport (ET), a mathematical formalization of the theoretically best possible unpaired translation between a pair of domains w.r.t. a given similarity function. Inspired by recent advances in neural optimal transport (OT), we propose a scalable algorithm to approximate ET maps as a limit of partial OT maps. We test our algorithm on toy examples and on the unpaired image-to-image translation task.
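
One way to read "theoretically best possible translation" is as a map constrained to land inside the target domain while minimizing the similarity cost; a hedged sketch of such a formalization (not necessarily the paper's exact definition) is

$$\inf_{T:\ T\sharp\mu \in \mathcal{P}(\text{supp}\,\nu)} \int c\big(x, T(x)\big) \, d\mu(x),$$

with the algorithm approximating such maps through a sequence of partial OT maps, as the abstract states.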

Entropic Neural Optimal Transport via Diffusion Processes

Nov 02, 2022
Nikita Gushchin, Alexander Kolesov, Alexander Korotin, Dmitry Vetrov, Evgeny Burnaev

We propose a novel neural algorithm for the fundamental problem of computing the entropic optimal transport (EOT) plan between probability distributions that are accessible by samples. Our algorithm is based on the saddle-point reformulation of the dynamic version of EOT, known as the Schr\"odinger Bridge problem. In contrast to prior methods for large-scale EOT, our algorithm is end-to-end, consists of a single learning step, has a fast inference procedure, and allows handling small values of the entropy regularization coefficient, which is of particular importance in some applied problems. Empirically, we show the performance of the method on several large-scale EOT tasks.
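
For context, the dynamic (Schr\"odinger Bridge) form of EOT from which the saddle-point reformulation starts is standard: among stochastic processes $T$ with marginals $\mu$ at $t = 0$ and $\nu$ at $t = 1$, find the one closest to the Wiener process $W^{\varepsilon}$ with volatility $\varepsilon$ started at $\mu$,

$$\min_{T:\ T_0 \sim \mu,\ T_1 \sim \nu} \text{KL}(T \,\|\, W^{\varepsilon}).$$

This is the textbook statement of the problem, not the paper's reformulation itself.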

Kantorovich Strikes Back! Wasserstein GANs are not Optimal Transport?

Jun 15, 2022
Alexander Korotin, Alexander Kolesov, Evgeny Burnaev

Wasserstein Generative Adversarial Networks (WGANs) are popular generative models built on the theory of Optimal Transport (OT) and the Kantorovich duality. Despite the success of WGANs, it is still unclear how well the underlying OT dual solvers approximate the OT cost (the Wasserstein-1 distance, $\mathbb{W}_{1}$) and the OT gradient needed to update the generator. In this paper, we address these questions. We construct 1-Lipschitz functions and use them to build ray monotone transport plans. This strategy yields pairs of continuous benchmark distributions with analytically known OT plans, OT costs and OT gradients in high-dimensional spaces such as spaces of images. We thoroughly evaluate popular WGAN dual-form solvers (gradient penalty, spectral normalization, entropic regularization, etc.) using these benchmark pairs. Even though these solvers perform well in WGANs, none of them faithfully computes $\mathbb{W}_{1}$ in high dimensions. Nevertheless, many provide a meaningful approximation of the OT gradient. These observations suggest that such solvers should not be treated as good estimators of $\mathbb{W}_{1}$, but to some extent they can indeed be used in variational problems requiring the minimization of $\mathbb{W}_{1}$.
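
The dual form such solvers approximate is the Kantorovich-Rubinstein duality for $\mathbb{W}_{1}$ (a standard identity, not a result of the paper):

$$\mathbb{W}_{1}(\mu, \nu) = \sup_{\|f\|_{\text{Lip}} \le 1} \left[ \int f \, d\mu - \int f \, d\nu \right],$$

so a 1-Lipschitz function constructed to attain the supremum pins down the ground-truth cost against which the solvers can be scored.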

Connecting adversarial attacks and optimal transport for domain adaptation

Jun 04, 2022
Arip Asadulaev, Vitaly Shutov, Alexander Korotin, Alexander Panfilov, Andrey Filchenkov

We present a novel algorithm for domain adaptation using optimal transport. In domain adaptation, the goal is to adapt a classifier trained on source domain samples to the target domain. In our method, we use optimal transport to map target samples to a domain we call the source fiction. This domain differs from the source but is accurately classified by the source domain classifier. Our main idea is to generate a source fiction by a c-cyclically monotone transformation of the target domain. If samples with the same labels in two domains are c-cyclically monotone, the optimal transport map between these domains preserves the class-wise structure, which is the main goal of domain adaptation. To generate the source fiction domain, we propose an algorithm based on our finding that adversarial attacks are a c-cyclically monotone transformation of the dataset. We conduct experiments on the Digits and Modern Office-31 datasets and achieve performance improvements for simple discrete optimal transport solvers on all adaptation tasks.
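
For reference, c-cyclical monotonicity is the standard notion: a set $\Gamma \subset \mathcal{X} \times \mathcal{Y}$ is c-cyclically monotone if for every finite family of pairs $(x_1, y_1), \dots, (x_n, y_n) \in \Gamma$ and every permutation $\sigma$,

$$\sum_{i=1}^{n} c(x_i, y_i) \le \sum_{i=1}^{n} c(x_i, y_{\sigma(i)}).$$

Supports of optimal plans are c-cyclically monotone, which is what allows such transformations to preserve the class-wise pairing.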

Neural Optimal Transport with General Cost Functionals

May 30, 2022
Arip Asadulaev, Alexander Korotin, Vage Egiazarian, Evgeny Burnaev

We present a novel neural-network-based algorithm to compute optimal transport (OT) plans and maps for general cost functionals. The algorithm is based on a saddle-point reformulation of the OT problem and generalizes prior OT methods for weak and strong cost functionals. As an application, we construct a functional that maps data distributions while preserving the class-wise structure of the data.
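
To make the saddle-point idea concrete, here is a minimal sketch of the max-min training loop used by neural OT solvers of this family, specialized to a strong quadratic cost; the names T_net and f_net, the Gaussian placeholder samplers, and all hyperparameters are illustrative assumptions, not the paper's general-functional algorithm.

import torch

def cost(x, y):
    # Strong quadratic cost; general cost functionals would replace this term.
    return 0.5 * ((x - y) ** 2).sum(dim=1)

dim = 2
T_net = torch.nn.Sequential(torch.nn.Linear(dim, 128), torch.nn.ReLU(),
                            torch.nn.Linear(128, dim))  # transport map T
f_net = torch.nn.Sequential(torch.nn.Linear(dim, 128), torch.nn.ReLU(),
                            torch.nn.Linear(128, 1))    # Kantorovich potential f
T_opt = torch.optim.Adam(T_net.parameters(), lr=1e-4)
f_opt = torch.optim.Adam(f_net.parameters(), lr=1e-4)

for step in range(10000):
    # Inner minimization over the map T: E_x[ c(x, T(x)) - f(T(x)) ].
    for _ in range(10):
        x = torch.randn(256, dim)        # placeholder source batch
        Tx = T_net(x)
        T_loss = (cost(x, Tx) - f_net(Tx).squeeze(1)).mean()
        T_opt.zero_grad(); T_loss.backward(); T_opt.step()
    # Outer step maximizes E_y[ f(y) ] - E_x[ f(T(x)) ]; descend its negative.
    x = torch.randn(256, dim)
    y = torch.randn(256, dim) + 3.0      # placeholder target batch
    f_loss = f_net(T_net(x).detach()).squeeze(1).mean() - f_net(y).squeeze(1).mean()
    f_opt.zero_grad(); f_loss.backward(); f_opt.step()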
