Jianye Hao

Planning Immediate Landmarks of Targets for Model-Free Skill Transfer across Agents

Dec 18, 2022
Minghuan Liu, Zhengbang Zhu, Menghui Zhu, Yuzheng Zhuang, Weinan Zhang, Jianye Hao

State-Aware Proximal Pessimistic Algorithms for Offline Reinforcement Learning

Nov 28, 2022
Chen Chen, Hongyao Tang, Yi Ma, Chao Wang, Qianli Shen, Dong Li, Jianye Hao

Prototypical context-aware dynamics generalization for high-dimensional model-based reinforcement learning

Nov 23, 2022
Junjie Wang, Yao Mu, Dong Li, Qichao Zhang, Dongbin Zhao, Yuzheng Zhuang, Ping Luo, Bin Wang, Jianye Hao

RITA: Boost Autonomous Driving Simulators with Realistic Interactive Traffic Flow

Nov 11, 2022
Zhengbang Zhu, Shenyu Zhang, Yuzheng Zhuang, Yuecheng Liu, Minghuan Liu, Liyuan Mao, Ziqing Gong, Weinan Zhang, Shixiong Kai, Qiang Gu, Bin Wang, Siyuan Cheng, Xinyu Wang, Jianye Hao, Yong Yu

ERL-Re$^2$: Efficient Evolutionary Reinforcement Learning with Shared State Representation and Individual Policy Representation

Oct 26, 2022
Pengyi Li, Hongyao Tang, Jianye Hao, Yan Zheng, Xian Fu, Zhaopeng Meng

PTDE: Personalized Training with Distillated Execution for Multi-Agent Reinforcement Learning

Oct 17, 2022
Yiqun Chen, Hangyu Mao, Tianle Zhang, Shiguang Wu, Bin Zhang, Jianye Hao, Dong Li, Bin Wang, Hongxing Chang

GFlowCausal: Generative Flow Networks for Causal Discovery

Oct 15, 2022
Wenqian Li, Yinchuan Li, Shengyu Zhu, Yunfeng Shao, Jianye Hao, Yan Pang

Decomposed Mutual Information Optimization for Generalized Context in Meta-Reinforcement Learning

Oct 09, 2022
Yao Mu, Yuzheng Zhuang, Fei Ni, Bin Wang, Jianyu Chen, Jianye Hao, Ping Luo

EUCLID: Towards Efficient Unsupervised Reinforcement Learning with Multi-choice Dynamics Model

Oct 02, 2022
Yifu Yuan, Jianye Hao, Fei Ni, Yao Mu, Yan Zheng, Yujing Hu, Jinyi Liu, Yingfeng Chen, Changjie Fan
