Jiafei Lyu

SEABO: A Simple Search-Based Method for Offline Imitation Learning

Feb 06, 2024
Jiafei Lyu, Xiaoteng Ma, Le Wan, Runze Liu, Xiu Li, Zongqing Lu


Understanding What Affects Generalization Gap in Visual Reinforcement Learning: Theory and Empirical Evidence

Feb 05, 2024
Jiafei Lyu, Le Wan, Xiu Li, Zongqing Lu


Exploration and Anti-Exploration with Distributional Random Network Distillation

Jan 25, 2024
Kai Yang, Jian Tao, Jiafei Lyu, Xiu Li


Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model

Nov 23, 2023
Kai Yang, Jian Tao, Jiafei Lyu, Chunjiang Ge, Jiaxin Chen, Qimai Li, Weihan Shen, Xiaolong Zhu, Xiu Li


The primacy bias in Model-based RL

Oct 23, 2023
Zhongjian Qiao, Jiafei Lyu, Xiu Li


Zero-shot Preference Learning for Offline RL via Optimal Transport

Jun 06, 2023
Runze Liu, Yali Du, Fengshuo Bai, Jiafei Lyu, Xiu Li


Normalization Enhances Generalization in Visual Reinforcement Learning

Jun 01, 2023
Lu Li, Jiafei Lyu, Guozheng Ma, Zilin Wang, Zhenjie Yang, Xiu Li, Zhiheng Li


Off-Policy RL Algorithms Can be Sample-Efficient for Continuous Control via Sample Multiple Reuse

May 29, 2023
Jiafei Lyu, Le Wan, Zongqing Lu, Xiu Li


Uncertainty-driven Trajectory Truncation for Model-based Offline Reinforcement Learning

Apr 10, 2023
Junjie Zhang, Jiafei Lyu, Xiaoteng Ma, Jiangpeng Yan, Jun Yang, Le Wan, Xiu Li


State Advantage Weighting for Offline RL

Oct 09, 2022
Jiafei Lyu, Aicheng Gong, Le Wan, Zongqing Lu, Xiu Li
