Hamish Flynn

Tighter Confidence Bounds for Sequential Kernel Regression

Mar 19, 2024
Hamish Flynn, David Reeb

Improved Algorithms for Stochastic Linear Bandits Using Tail Bounds for Martingale Mixtures

Sep 27, 2023
Hamish Flynn, David Reeb, Melih Kandemir, Jan Peters

PAC-Bayes Bounds for Bandit Problems: A Survey and Experimental Comparison

Nov 29, 2022
Hamish Flynn, David Reeb, Melih Kandemir, Jan Peters

PAC-Bayesian Lifelong Learning For Multi-Armed Bandits

Mar 07, 2022
Hamish Flynn, David Reeb, Melih Kandemir, Jan Peters
