
Lalit Jain

Humor in AI: Massive Scale Crowd-Sourced Preferences and Benchmarks for Cartoon Captioning

Jun 15, 2024

Adaptive Experimentation When You Can't Experiment

Jun 15, 2024

Off-Policy Evaluation from Logged Human Feedback

Jun 14, 2024

Best of Three Worlds: Adaptive Experimentation for Digital Marketing in Practice

Feb 26, 2024

DIRECT: Deep Active Learning under Imbalance and Label Noise

Dec 14, 2023

Fair Active Learning in Low-Data Regimes

Dec 13, 2023

Pessimistic Off-Policy Multi-Objective Optimization

Oct 28, 2023

Minimax Optimal Submodular Optimization with Bandit Feedback

Oct 27, 2023

Optimal Exploration is no harder than Thompson Sampling

Oct 24, 2023

A/B Testing and Best-arm Identification for Linear Bandits with Robustness to Non-stationarity

Jul 27, 2023