
Cindy Trinh

ENS Paris-Saclay

Towards Optimal Algorithms for Multi-Player Bandits without Collision Sensing Information

Mar 24, 2021

A High Performance, Low Complexity Algorithm for Multi-Player Bandits Without Collision Sensing Information

Feb 19, 2021

MLPerf Mobile Inference Benchmark: Why Mobile AI Benchmarking Is Hard and What to Do About It

Dec 03, 2020

Solving Bernoulli Rank-One Bandits with Unimodal Thompson Sampling

Dec 06, 2019