D. Belomestny
UVIP: Model-Free Approach to Evaluate Reinforcement Learning Algorithms
Jun 03, 2021
D. Belomestny, I. Levin, E. Moulines, A. Naumov, S. Samsonov, V. Zorina

Variance reduction for Markov chains with application to MCMC
Oct 08, 2019
D. Belomestny, L. Iosipoi, E. Moulines, A. Naumov, S. Samsonov

Variance reduction for MCMC methods via martingale representations
Mar 18, 2019
D. Belomestny, E. Moulines, N. Shagadatov, M. Urusov

Variance reduction via empirical variance minimization: convergence and complexity
Apr 02, 2018
D. Belomestny, L. Iosipoi, N. Zhivotovskiy