Zongqing Lu

Multi-Agent Sequential Decision-Making via Communication

Sep 26, 2022
Ziluo Ding, Kefan Su, Weixin Hong, Liwen Zhu, Tiejun Huang, Zongqing Lu

More Centralized Training, Still Decentralized Execution: Multi-Agent Conditional Policy Factorization

Sep 26, 2022
Jiangxing Wang, Deheng Ye, Zongqing Lu

MA2QL: A Minimalist Approach to Fully Decentralized Multi-Agent Reinforcement Learning

Sep 17, 2022
Kefan Su, Siyuan Zhou, Chuang Gan, Xiangjun Wang, Zongqing Lu

Robust Task Representations for Offline Meta-Reinforcement Learning via Contrastive Learning

Jun 21, 2022
Haoqi Yuan, Zongqing Lu

Towards Human-Level Bimanual Dexterous Manipulation with Reinforcement Learning

Jun 17, 2022
Yuanpei Chen, Yaodong Yang, Tianhao Wu, Shengjie Wang, Xidong Feng, Jiechuan Jiang, Stephen Marcus McAleer, Hao Dong, Zongqing Lu, Song-Chun Zhu

Double Check Your State Before Trusting It: Confidence-Aware Bidirectional Offline Model-Based Imagination

Jun 16, 2022
Jiafei Lyu, Xiu Li, Zongqing Lu

Mildly Conservative Q-Learning for Offline Reinforcement Learning

Jun 09, 2022
Jiafei Lyu, Xiaoteng Ma, Xiu Li, Zongqing Lu

Learning to Share in Multi-Agent Reinforcement Learning

Dec 16, 2021
Yuxuan Yi, Ge Li, Yaowei Wang, Zongqing Lu

APANet: Adaptive Prototypes Alignment Network for Few-Shot Semantic Segmentation

Nov 24, 2021
Jiacheng Chen, Bin-Bin Gao, Zongqing Lu, Jing-Hao Xue, Chengjie Wang, Qingmin Liao

Divergence-Regularized Multi-Agent Actor-Critic

Oct 01, 2021
Kefan Su, Zongqing Lu
