Omar Besbes

Quality vs. Quantity of Data in Contextual Decision-Making: Exact Analysis under Newsvendor Loss

Feb 16, 2023
Omar Besbes, Will Ma, Omar Mouchtaki

Beyond IID: data-driven decision-making in heterogeneous environments

Jun 20, 2022
Omar Besbes, Will Ma, Omar Mouchtaki

Contextual Inverse Optimization: Offline and Online Learning

Jun 26, 2021
Omar Besbes, Yuri Fonseca, Ilan Lobel

Optimal Exploration-Exploitation in a Multi-Armed-Bandit Problem with Non-stationary Rewards

May 13, 2014
Omar Besbes, Yonatan Gur, Assaf Zeevi
