Lin F. Yang

Contexts can be Cheap: Solving Stochastic Contextual Bandits with Linear Bandit Algorithms

Nov 08, 2022
Osama A. Hanna, Lin F. Yang, Christina Fragouli

Near-Optimal Sample Complexity Bounds for Constrained MDPs

Jun 13, 2022
Sharan Vaswani, Lin F. Yang, Csaba Szepesvári

Learning in Distributed Contextual Linear Bandits Without Sharing the Context

Jun 08, 2022
Osama A. Hanna, Lin F. Yang, Christina Fragouli

Provably Efficient Lifelong Reinforcement Learning with Linear Function Approximation

Jun 01, 2022
Sanae Amani, Lin F. Yang, Ching-An Cheng

Distributed Contextual Linear Bandits with Minimax Optimal Communication Cost

May 26, 2022
Sanae Amani, Tor Lattimore, András György, Lin F. Yang

Solving Multi-Arm Bandit Using a Few Bits of Communication

Nov 11, 2021
Osama A. Hanna, Lin F. Yang, Christina Fragouli

Settling the Horizon-Dependence of Sample Complexity in Reinforcement Learning

Nov 01, 2021
Yuanzhi Li, Ruosong Wang, Lin F. Yang

Breaking the Moments Condition Barrier: No-Regret Algorithm for Bandits with Super Heavy-Tailed Payoffs

Oct 26, 2021
Han Zhong, Jiayi Huang, Lin F. Yang, Liwei Wang

Decentralized Cooperative Multi-Agent Reinforcement Learning with Exploration

Oct 12, 2021
Weichao Mao, Tamer Başar, Lin F. Yang, Kaiqing Zhang

Theoretically Principled Deep RL Acceleration via Nearest Neighbor Function Approximation

Oct 09, 2021
Junhong Shen, Lin F. Yang