
Suho Shin

Tokenized Bandit for LLM Decoding and Alignment

Jun 08, 2025

Ad Auctions for LLMs via Retrieval Augmented Generation

Jun 12, 2024

Dueling Over Dessert, Mastering the Art of Repeated Cake Cutting

Feb 18, 2024

Replication-proof Bandit Mechanism Design

Dec 28, 2023

Robust and Performance Incentivizing Algorithms for Multi-Armed Bandits with Strategic Agents

Dec 13, 2023

Online Advertisements with LLMs: Opportunities and Challenges

Nov 11, 2023

An Improved Relaxation for Oracle-Efficient Adversarial Contextual Bandits

Oct 29, 2023

Regret Analysis of Repeated Delegated Choice

Oct 10, 2023

Bandit Social Learning: Exploration under Myopic Behavior

Feb 15, 2023

Multi-armed Bandit Algorithm against Strategic Replication

Oct 23, 2021