Alexander Panfilov


Provable Compositional Generalization for Object-Centric Learning

Oct 09, 2023
Thaddäus Wiedemer, Jack Brady, Alexander Panfilov, Attila Juhos, Matthias Bethge, Wieland Brendel

Learning representations that generalize to novel compositions of known concepts is crucial for bridging the gap between human and machine perception. One prominent effort is learning object-centric representations, which are widely conjectured to enable compositional generalization. Yet it remains unclear when this conjecture holds, as a principled theoretical or empirical understanding of compositional generalization is lacking. In this work, we investigate when compositional generalization is guaranteed for object-centric representations through the lens of identifiability theory. We show that autoencoders that satisfy structural assumptions on the decoder and enforce encoder-decoder consistency will learn object-centric representations that provably generalize compositionally. We validate our theoretical result and highlight the practical relevance of our assumptions through experiments on synthetic image data.
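A minimal sketch of the kind of decoder structure the abstract alludes to: if the decoder decomposes additively over object slots, any novel combination of slots decodes correctly by construction. All names, dimensions, and the additive form here are illustrative assumptions, not the paper's exact conditions.

```python
import numpy as np

# Illustrative sketch: a compositional (slot-wise additive) decoder.
# Each latent slot z_k is rendered independently and the renders are
# summed, so the decoder decomposes over objects by construction.

rng = np.random.default_rng(0)
D_SLOT, D_IMG = 4, 16                  # slot latent dim, "image" dim
W = rng.normal(size=(D_SLOT, D_IMG))   # shared per-slot decoder weights

def decode_slot(z_k):
    """Render a single object slot to image space."""
    return np.tanh(z_k @ W)

def decode(slots):
    """Additive composition: the image is the sum of slot renders."""
    return sum(decode_slot(z_k) for z_k in slots)

# Train-time combinations might be (a, b) and (c, d).
a, b, c, d = rng.normal(size=(4, D_SLOT))
img_ab = decode([a, b])

# A novel combination (a, d) is decoded correctly without ever having
# been seen as a pair, purely because of the additive structure.
img_ad = decode([a, d])
assert np.allclose(img_ad, decode_slot(a) + decode_slot(d))
```

The point of the sketch is structural: compositional generalization of the decoder follows from its decomposition over slots, which is the flavor of assumption the paper formalizes.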

* The first four authors contributed equally 

Multi-step domain adaptation by adversarial attack to $\mathcal{H}\Delta\mathcal{H}$-divergence

Jul 18, 2022
Arip Asadulaev, Alexander Panfilov, Andrey Filchenkov


Adversarial examples are transferable between different models. In our paper, we propose to use this property for multi-step domain adaptation. In the unsupervised domain adaptation setting, we demonstrate that replacing the source domain with adversarial examples targeting the $\mathcal{H}\Delta\mathcal{H}$-divergence can improve the source classifier's accuracy on the target domain. Our method can be combined with most domain adaptation techniques. We conducted a range of experiments and achieved accuracy improvements on the Digits and Office-Home datasets.
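The core move can be illustrated with a toy FGSM-style step: perturb source samples so that a domain discriminator scores them as more target-like. The linear discriminator, the step rule, and all names below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

# Illustrative sketch: push source samples toward the target domain by
# attacking a domain discriminator d(x) = sigmoid(w.x + b), assumed to
# have been trained to output 1 on target samples.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_toward_target(x, w, eps=0.1):
    """One signed-gradient step that raises the discriminator's
    'target' score; the gradient of w.x w.r.t. x has sign(w)."""
    return x + eps * np.sign(w)

rng = np.random.default_rng(1)
w, b = rng.normal(size=3), 0.0
x_src = rng.normal(size=(5, 3))       # toy source batch

x_adv = fgsm_toward_target(x_src, w)
score_before = sigmoid(x_src @ w + b)
score_after = sigmoid(x_adv @ w + b)
assert np.all(score_after > score_before)   # moved toward "target"
```

Iterating such steps, and retraining on the perturbed batch each time, gives the multi-step flavor described in the abstract.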


Easy Batch Normalization

Jul 18, 2022
Arip Asadulaev, Alexander Panfilov, Andrey Filchenkov


Adversarial examples have been shown to improve object recognition. But what about their opposite, easy examples? Easy examples are samples that a machine learning model classifies correctly with high confidence. In our paper, we take a first step toward exploring the potential benefits of using easy examples in the training procedure of neural networks. We propose using an auxiliary batch normalization for easy examples to improve both standard and robust accuracy.
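A minimal sketch of the auxiliary-branch idea: keep one batch-norm branch for regular examples and a second, auxiliary branch for easy (high-confidence) examples. The routing threshold, shapes, and omission of affine parameters are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: dual batch normalization, with an auxiliary
# branch reserved for "easy" (high-confidence) examples.

def batch_norm(x, eps=1e-5):
    """Normalize a batch per feature (affine parameters omitted)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mu) / np.sqrt(var + eps)

def dual_bn(x, confidence, thresh=0.5):
    """Route high-confidence (easy) examples through the auxiliary BN
    and the rest through the main BN, then re-interleave the batch."""
    easy = confidence >= thresh
    out = np.empty_like(x)
    out[easy] = batch_norm(x[easy])      # auxiliary branch
    out[~easy] = batch_norm(x[~easy])    # main branch
    return out

rng = np.random.default_rng(2)
x = rng.normal(size=(8, 4))
conf = rng.uniform(size=8)               # stand-in for model confidence
y = dual_bn(x, conf)
assert y.shape == x.shape
```

The design intuition is that easy and regular examples follow different feature statistics, so giving each group its own normalization statistics avoids mixing them.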


Connecting adversarial attacks and optimal transport for domain adaptation

Jun 04, 2022
Arip Asadulaev, Vitaly Shutov, Alexander Korotin, Alexander Panfilov, Andrey Filchenkov


We present a novel algorithm for domain adaptation using optimal transport. In domain adaptation, the goal is to adapt a classifier trained on source domain samples to the target domain. In our method, we use optimal transport to map target samples to a domain we call the source fiction. This domain differs from the source but is accurately classified by the source domain classifier. Our main idea is to generate the source fiction by a c-cyclically monotone transformation of the target domain. If samples with the same labels in two domains are c-cyclically monotone, the optimal transport map between these domains preserves the class-wise structure, which is the main goal of domain adaptation. To generate the source fiction domain, we propose an algorithm based on our finding that adversarial attacks are a c-cyclically monotone transformation of the dataset. We conduct experiments on the Digits and Modern Office-31 datasets and achieve improved performance with simple discrete optimal transport solvers on all adaptation tasks.
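To make the discrete optimal transport step concrete, here is a tiny brute-force sketch: match two small point sets under squared-Euclidean cost. When classes in the two sets are close (as a c-cyclically monotone "source fiction" construction is meant to ensure), the optimal matching pairs like with like. The point sets and brute-force solver are illustrative assumptions, not the paper's solver.

```python
from itertools import permutations

import numpy as np

# Illustrative sketch: discrete OT between two tiny point sets under
# squared-Euclidean cost, solved by brute force over assignments.

def ot_assignment(src, tgt):
    """Return the permutation p minimizing the total cost of matching
    src[i] -> tgt[p[i]] (brute force; only feasible for tiny n)."""
    n = len(src)
    cost = lambda p: sum(np.sum((src[i] - tgt[p[i]]) ** 2)
                         for i in range(n))
    return min(permutations(range(n)), key=cost)

# Each target point sits near the same-index "fiction" point, mimicking
# a class-structure-preserving setup.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
tgt = np.array([[0.1, 0.1], [1.1, 0.0], [0.0, 1.2]])

perm = ot_assignment(src, tgt)
assert perm == (0, 1, 2)   # the class-preserving matching is optimal
```

The sketch shows why the construction matters: the OT map preserves class structure only when same-class samples across the two domains are already the cheapest pairs to match.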
