Yuting Wei

Approximate message passing from random initialization with applications to $\mathbb{Z}_{2}$ synchronization

Feb 07, 2023
Gen Li, Wei Fan, Yuting Wei


Minimax-Optimal Multi-Agent RL in Zero-Sum Markov Games With a Generative Model

Aug 22, 2022
Gen Li, Yuejie Chi, Yuting Wei, Yuxin Chen


A Non-Asymptotic Framework for Approximate Message Passing in Spiked Models

Aug 05, 2022
Gen Li, Yuting Wei


Mitigating multiple descents: A model-agnostic framework for risk monotonization

May 25, 2022
Pratik Patil, Arun Kumar Kuchibhotla, Yuting Wei, Alessandro Rinaldo


Settling the Sample Complexity of Model-Based Offline Reinforcement Learning

Apr 11, 2022
Gen Li, Laixi Shi, Yuxin Chen, Yuejie Chi, Yuting Wei


Pessimistic Q-Learning for Offline Reinforcement Learning: Towards Optimal Sample Complexity

Feb 28, 2022
Laixi Shi, Gen Li, Yuting Wei, Yuxin Chen, Yuejie Chi


Minimum $\ell_{1}$-norm interpolators: Precise asymptotics and multiple descent

Oct 18, 2021
Yue Li, Yuting Wei


Fast Policy Extragradient Methods for Competitive Games with Entropy Regularization

May 31, 2021
Shicong Cen, Yuting Wei, Yuejie Chi


Sample-Efficient Reinforcement Learning Is Feasible for Linearly Realizable MDPs with Limited Revisiting

May 17, 2021
Gen Li, Yuxin Chen, Yuejie Chi, Yuantao Gu, Yuting Wei


Is Q-Learning Minimax Optimal? A Tight Sample Complexity Analysis

Mar 16, 2021
Gen Li, Changxiao Cai, Yuxin Chen, Yuantao Gu, Yuting Wei, Yuejie Chi
