
Top $K$ Ranking for Multi-Armed Bandit with Noisy Evaluations



Evrard Garcelon, Vashist Avadhanula, Alessandro Lazaric, Matteo Pirotta



QUEST: Queue Simulation for Content Moderation at Scale



Rahul Makhijani, Parikshit Shah, Vashist Avadhanula, Caner Gocmen, Nicolás E. Stier-Moses, Julián Mestre



Stochastic Bandits for Multi-platform Budget Optimization in Online Advertising



Vashist Avadhanula, Riccardo Colini-Baldeschi, Stefano Leonardi, Karthik Abinav Sankararaman, Okke Schrijvers



Improved Optimistic Algorithm For The Multinomial Logit Contextual Bandit



Priyank Agrawal, Vashist Avadhanula, Theja Tulabandhula

* 25 pages 


Multi-armed Bandits with Cost Subsidy



Deeksha Sinha, Karthik Abinav Sankararaman, Abbas Kazerouni, Vashist Avadhanula



Thompson Sampling for Contextual Bandit Problems with Auxiliary Safety Constraints



Samuel Daulton, Shaun Singh, Vashist Avadhanula, Drew Dimmery, Eytan Bakshy

* To appear at the NeurIPS 2019 Workshop on Safety and Robustness in Decision Making. 11 pages (including references and appendix)


Thompson Sampling for the MNL-Bandit



Shipra Agrawal, Vashist Avadhanula, Vineet Goyal, Assaf Zeevi

* Accepted for presentation at the Conference on Learning Theory (COLT) 2017


MNL-Bandit: A Dynamic Learning Approach to Assortment Selection



Shipra Agrawal, Vashist Avadhanula, Vineet Goyal, Assaf Zeevi

