Ningyuan Chen

Comparing Exploration-Exploitation Strategies of LLMs and Humans: Insights from Standard Multi-armed Bandit Tasks

May 15, 2025

Reinforcement Learning for Intensity Control: An Application to Choice-Based Network Revenue Management

Jun 08, 2024

Contextual Optimization under Covariate Shift: A Robust Approach by Intersecting Wasserstein Balls

Jun 04, 2024

No Algorithmic Collusion in Two-Player Blindfolded Game with Thompson Sampling

May 23, 2024

Allocating Divisible Resources on Arms with Unknown and Random Rewards

Jun 28, 2023

Algorithmic Decision-Making Safeguarded by Human Knowledge

Nov 20, 2022

Learning Consumer Preferences from Bundle Sales Data

Sep 11, 2022

Bridging Adversarial and Nonstationary Multi-armed Bandit

Jan 05, 2022

Debiasing Samples from Online Learning Using Bootstrap

Jul 31, 2021

Sublinear Regret for Learning POMDPs

Jul 14, 2021