
Tianying Ji

Bidirectional-Reachable Hierarchical Reinforcement Learning with Mutually Responsive Policies

Jun 26, 2024

RoboGolf: Mastering Real-World Minigolf with a Reflective Multi-Modality Vision-Language Model

Jun 14, 2024

OMPO: A Unified Framework for RL under Policy and Dynamics Shifts

May 29, 2024

Offline-Boosted Actor-Critic: Adaptively Blending Optimal Historical Behaviors in Deep Off-Policy RL

May 28, 2024

Scrutinize What We Ignore: Reining Task Representation Shift In Context-Based Offline Meta Reinforcement Learning

May 20, 2024

ACE: Off-Policy Actor-Critic with Causality-Aware Entropy Regularization

Feb 22, 2024

DrM: Mastering Visual Reinforcement Learning through Dormant Ratio Minimization

Oct 30, 2023

H2O+: An Improved Framework for Hybrid Offline-and-Online RL with Dynamics Gaps

Sep 22, 2023

Seizing Serendipity: Exploiting the Value of Past Success in Off-Policy Actor-Critic

Jun 06, 2023

When to Update Your Model: Constrained Model-based Reinforcement Learning

Oct 15, 2022