Junyu Zhang

Leveraging Hyperbolic Embeddings for Coarse-to-Fine Robot Design

Nov 02, 2023
Heng Dong, Junyu Zhang, Chongjie Zhang

Synthesizing Physically Plausible Human Motions in 3D Scenes

Aug 17, 2023
Liang Pan, Jingbo Wang, Buzhen Huang, Junyu Zhang, Haofan Wang, Xu Tang, Yangang Wang

Offline Meta Reinforcement Learning with In-Distribution Online Adaptation

Jun 01, 2023
Jianhao Wang, Jin Zhang, Haozhe Jiang, Junyu Zhang, Liwei Wang, Chongjie Zhang

Symmetry-Aware Robot Design with Structured Subgroups

May 31, 2023
Heng Dong, Junyu Zhang, Tonghan Wang, Chongjie Zhang

Provably Efficient Gauss-Newton Temporal Difference Learning Method with Function Approximation

Feb 25, 2023
Zhifa Ke, Zaiwen Wen, Junyu Zhang

A Near-Optimal Primal-Dual Method for Off-Policy Learning in CMDP

Jul 13, 2022
Fan Chen, Junyu Zhang, Zaiwen Wen

On the Sample Complexity and Metastability of Heavy-tailed Policy Search in Continuous Control

Jun 15, 2021
Amrit Singh Bedi, Anjaly Parayil, Junyu Zhang, Mengdi Wang, Alec Koppel

MARL with General Utilities via Decentralized Shadow Reward Actor-Critic

May 29, 2021
Junyu Zhang, Amrit Singh Bedi, Mengdi Wang, Alec Koppel

On the Convergence and Sample Efficiency of Variance-Reduced Policy Gradient Method

Feb 17, 2021
Junyu Zhang, Chengzhuo Ni, Zheng Yu, Csaba Szepesvari, Mengdi Wang

Variational Policy Gradient Method for Reinforcement Learning with General Utilities

Jul 04, 2020
Junyu Zhang, Alec Koppel, Amrit Singh Bedi, Csaba Szepesvari, Mengdi Wang
