Aleksandrs Slivkins

Can large language models explore in-context?

Mar 22, 2024
Akshay Krishnamurthy, Keegan Harris, Dylan J. Foster, Cyril Zhang, Aleksandrs Slivkins


Impact of Decentralized Learning on Player Utilities in Stackelberg Games

Feb 29, 2024
Kate Donahue, Nicole Immorlica, Meena Jagadeesan, Brendan Lucier, Aleksandrs Slivkins


Incentivized Exploration via Filtered Posterior Sampling

Feb 20, 2024
Anand Kalvit, Aleksandrs Slivkins, Yonatan Gur


Robust and Performance Incentivizing Algorithms for Multi-Armed Bandits with Strategic Agents

Dec 13, 2023
Seyed A. Esmaeili, Suho Shin, Aleksandrs Slivkins


Algorithmic Persuasion Through Simulation: Information Design in the Age of Generative AI

Nov 29, 2023
Keegan Harris, Nicole Immorlica, Brendan Lucier, Aleksandrs Slivkins


Oracle-Efficient Pessimism: Offline Policy Optimization in Contextual Bandits

Jun 13, 2023
Lequn Wang, Akshay Krishnamurthy, Aleksandrs Slivkins


Bandit Social Learning: Exploration under Myopic Behavior

Feb 15, 2023
Kiarash Banihashem, MohammadTaghi Hajiaghayi, Suho Shin, Aleksandrs Slivkins


Autobidders with Budget and ROI Constraints: Efficiency, Regret, and Pacing Dynamics

Jan 30, 2023
Brendan Lucier, Sarath Pattathil, Aleksandrs Slivkins, Mengxiao Zhang


Efficient Contextual Bandits with Knapsacks via Regression

Nov 14, 2022
Aleksandrs Slivkins, Dylan Foster


Incentivizing Combinatorial Bandit Exploration

Jun 01, 2022
Xinyan Hu, Dung Daniel Ngo, Aleksandrs Slivkins, Zhiwei Steven Wu
