Junya Honda

A Further Efficient Algorithm with Best-of-Both-Worlds Guarantees for $m$-Set Semi-Bandit Problem

Mar 12, 2026

Note on Follow-the-Perturbed-Leader in Combinatorial Semi-Bandit Problems

Jun 14, 2025

Optimal Regret of Bernoulli Bandits under Global Differential Privacy

May 08, 2025

Multi-Player Approaches for Dueling Bandits

May 25, 2024

Learning with Posterior Sampling for Revenue Management under Time-varying Demand

May 08, 2024

Adaptive Learning Rate for Follow-the-Regularized-Leader: Competitive Analysis and Best-of-Both-Worlds

Mar 10, 2024

Follow-the-Perturbed-Leader with Fréchet-type Tail Distributions: Optimality in Adversarial Bandits and Best-of-Both-Worlds

Mar 08, 2024

Exploration by Optimization with Hybrid Regularizers: Logarithmic Regret with Adversarial Robustness in Partial Monitoring

Feb 13, 2024

Thompson Exploration with Best Challenger Rule in Best Arm Identification

Oct 01, 2023

Stability-penalty-adaptive Follow-the-regularized-leader: Sparsity, Game-dependency, and Best-of-both-worlds

May 26, 2023