Jiafei Lyu

World Models with Hints of Large Language Models for Goal Achieving

Jun 11, 2024

Cross-Domain Policy Adaptation by Capturing Representation Mismatch

May 24, 2024

SEABO: A Simple Search-Based Method for Offline Imitation Learning

Feb 06, 2024

Understanding What Affects Generalization Gap in Visual Reinforcement Learning: Theory and Empirical Evidence

Feb 05, 2024

Exploration and Anti-Exploration with Distributional Random Network Distillation

Jan 25, 2024

Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model

Nov 23, 2023

The Primacy Bias in Model-based RL

Oct 23, 2023

Zero-shot Preference Learning for Offline RL via Optimal Transport

Jun 06, 2023

Normalization Enhances Generalization in Visual Reinforcement Learning

Jun 01, 2023

Off-Policy RL Algorithms Can be Sample-Efficient for Continuous Control via Sample Multiple Reuse

May 29, 2023