Ievgen Redko

Analysing Multi-Task Regression via Random Matrix Theory with Application to Time Series Forecasting

Jun 14, 2024

Unlocking the Potential of Transformers in Time Series Forecasting with Sharpness-Aware Minimization and Channel-Wise Attention

Feb 19, 2024
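The entry above names Sharpness-Aware Minimization (SAM). Purely as a generic illustration of the standard SAM update, and not of this paper's transformer or channel-wise attention model (which the listing does not describe), a minimal PyTorch sketch of one SAM step could look like the following; model, loss_fn, base_optimizer and rho are placeholder names.

```python
# Minimal, generic sketch of one Sharpness-Aware Minimization (SAM) step.
# Illustrative only: model, loss_fn, data and rho are placeholders, not the paper's setup.
import torch

def sam_step(model, loss_fn, x, y, base_optimizer, rho=0.05):
    base_optimizer.zero_grad()

    # First pass: gradients of the loss at the current weights w.
    loss = loss_fn(model(x), y)
    loss.backward()

    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2) + 1e-12

    # Ascent step: perturb w by e = rho * g / ||g|| towards a locally "sharp" point.
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / grad_norm
            p.add_(e)
            eps.append(e)

    # Second pass: gradients evaluated at the perturbed weights w + e.
    base_optimizer.zero_grad()
    loss_fn(model(x), y).backward()

    # Undo the perturbation, then update w with the sharpness-aware gradients.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    base_optimizer.step()
    return loss.item()
```

In a training loop, a call to this function would take the place of the usual loss.backward() followed by optimizer.step().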

Characterising Gradients for Unsupervised Accuracy Estimation under Distribution Shift

Jan 17, 2024

Leveraging Ensemble Diversity for Robust Self-Training in the Presence of Sample Selection Bias

Oct 26, 2023

Understanding deep neural networks through the lens of their non-linearity

Oct 17, 2023

Revisiting invariances and introducing priors in Gromov-Wasserstein distances

Jul 19, 2023
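For context on the Gromov-Wasserstein (GW) distances named in the entry above: GW compares two metric-measure spaces through their intra-domain distance matrices rather than a shared ground cost, which is the source of its invariances. A minimal, generic sketch with the POT library follows (placeholder point clouds and uniform weights; it does not reproduce the priors introduced in the paper).

```python
# Minimal, generic sketch of a Gromov-Wasserstein coupling between two point clouds
# using the POT library; the data and uniform weights are arbitrary placeholders.
import numpy as np
import ot  # Python Optimal Transport (POT)

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))   # source samples in R^2
Y = rng.normal(size=(40, 5))   # target samples in R^5 (different dimensions are fine for GW)

# Intra-domain pairwise cost matrices: GW only compares these, never X to Y directly.
C1 = ot.dist(X, X)
C2 = ot.dist(Y, Y)
C1 /= C1.max()
C2 /= C2.max()

# Uniform weights on both spaces.
p = ot.unif(X.shape[0])
q = ot.unif(Y.shape[0])

# GW coupling and the associated GW discrepancy.
T, log = ot.gromov.gromov_wasserstein(C1, C2, p, q, loss_fun='square_loss', log=True)
print("coupling shape:", T.shape)      # (30, 40)
print("GW distance:", log['gw_dist'])
```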

Beyond invariant representation learning: linearly alignable latent spaces for efficient closed-form domain adaptation

May 12, 2023

Meta Optimal Transport

Jun 10, 2022
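As generic background for the optimal transport entries above (not the meta-learning component of the paper itself), the underlying entropy-regularized OT problem between two discrete distributions can be solved with the Sinkhorn algorithm; a minimal POT sketch with placeholder data:

```python
# Minimal, generic sketch of entropy-regularized optimal transport via Sinkhorn (POT).
# Placeholder data; this illustrates the base OT problem only, not the paper's method.
import numpy as np
import ot

rng = np.random.default_rng(0)
xs = rng.normal(loc=0.0, size=(50, 2))   # source samples
xt = rng.normal(loc=2.0, size=(60, 2))   # target samples

a = ot.unif(xs.shape[0])                 # uniform source weights
b = ot.unif(xt.shape[0])                 # uniform target weights
M = ot.dist(xs, xt)                      # squared-Euclidean ground cost
M /= M.max()

reg = 0.05                               # entropic regularization; larger = smoother coupling
T = ot.sinkhorn(a, b, M, reg)            # (50, 60) transport plan
cost = np.sum(T * M)                     # transport cost under the plan
print("plan shape:", T.shape, "approx. OT cost:", cost)
```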

Unbalanced CO-Optimal Transport

May 31, 2022

Factored couplings in multi-marginal optimal transport via difference of convex programming

Oct 18, 2021