Ievgen Redko

Leveraging Generic Time Series Foundation Models for EEG Classification

Oct 31, 2025

Time Series Representations for Classification Lie Hidden in Pretrained Vision Transformers

Jun 10, 2025

Mantis: Lightweight Calibrated Foundation Model for User-Friendly Time Series Classification

Feb 21, 2025

Zero-shot Model-based Reinforcement Learning using Large Language Models

Oct 15, 2024

Large Language Models as Markov Chains

Oct 03, 2024

Can LLMs predict the convergence of Stochastic Gradient Descent?

Aug 03, 2024

Analysing Multi-Task Regression via Random Matrix Theory with Application to Time Series Forecasting

Jun 14, 2024

Unlocking the Potential of Transformers in Time Series Forecasting with Sharpness-Aware Minimization and Channel-Wise Attention

Feb 19, 2024

Characterising Gradients for Unsupervised Accuracy Estimation under Distribution Shift

Jan 17, 2024

Leveraging Ensemble Diversity for Robust Self-Training in the Presence of Sample Selection Bias

Oct 26, 2023