Hongyao Tang

Bridging Evolutionary Algorithms and Reinforcement Learning: A Comprehensive Survey

Jan 22, 2024
Pengyi Li, Jianye Hao, Hongyao Tang, Xian Fu, Yan Zheng, Ke Tang

The Ladder in Chaos: A Simple and Effective Improvement to General DRL Algorithms by Policy Path Trimming and Boosting

Mar 02, 2023
Hongyao Tang, Min Zhang, Jianye Hao

State-Aware Proximal Pessimistic Algorithms for Offline Reinforcement Learning

Nov 28, 2022
Chen Chen, Hongyao Tang, Yi Ma, Chao Wang, Qianli Shen, Dong Li, Jianye Hao

ERL-Re$^2$: Efficient Evolutionary Reinforcement Learning with Shared State Representation and Individual Policy Representation

Oct 26, 2022
Pengyi Li, Hongyao Tang, Jianye Hao, Yan Zheng, Xian Fu, Zhaopeng Meng

Towards A Unified Policy Abstraction Theory and Representation Learning Approach in Markov Decision Processes

Sep 16, 2022
Min Zhang, Hongyao Tang, Jianye Hao, Yan Zheng

PAnDR: Fast Adaptation to New Environments from Offline Experiences via Decoupling Policy and Environment Representations

Apr 06, 2022
Tong Sang, Hongyao Tang, Yi Ma, Jianye Hao, Yan Zheng, Zhaopeng Meng, Boyan Li, Zhen Wang

PMIC: Improving Multi-Agent Reinforcement Learning with Progressive Mutual Information Collaboration

Mar 16, 2022
Pengyi Li, Hongyao Tang, Tianpei Yang, Xiaotian Hao, Tong Sang, Yan Zheng, Jianye Hao, Matthew E. Taylor, Zhen Wang

ED2: An Environment Dynamics Decomposition Framework for World Model Construction

Dec 06, 2021
Cong Wang, Tianpei Yang, Jianye Hao, Yan Zheng, Hongyao Tang, Fazl Barez, Jinyi Liu, Jiajie Peng, Haiyin Piao, Zhixiao Sun

Uncertainty-aware Low-Rank Q-Matrix Estimation for Deep Reinforcement Learning

Nov 19, 2021
Tong Sang, Hongyao Tang, Jianye Hao, Yan Zheng, Zhaopeng Meng

Exploration in Deep Reinforcement Learning: A Comprehensive Survey

Sep 15, 2021
Tianpei Yang, Hongyao Tang, Chenjia Bai, Jinyi Liu, Jianye Hao, Zhaopeng Meng, Peng Liu
