Zhengyao Jiang

H-GAP: Humanoid Control with a Generalist Planner

Dec 05, 2023
Zhengyao Jiang, Yingchen Xu, Nolan Wagener, Yicheng Luo, Michael Janner, Edward Grefenstette, Tim Rocktäschel, Yuandong Tian

Mildly Constrained Evaluation Policy for Offline Reinforcement Learning

Jun 06, 2023
Linjie Xu, Zhengyao Jiang, Jinyu Wang, Lei Song, Jiang Bian

Optimal Transport for Offline Imitation Learning

Mar 24, 2023
Yicheng Luo, Zhengyao Jiang, Samuel Cohen, Edward Grefenstette, Marc Peter Deisenroth

Efficient Planning in a Compact Latent Action Space

Aug 25, 2022
Zhengyao Jiang, Tianjun Zhang, Michael Janner, Yueying Li, Tim Rocktäschel, Edward Grefenstette, Yuandong Tian

Graph Backup: Data Efficient Backup Exploiting Markovian Transitions

May 31, 2022
Zhengyao Jiang, Tianjun Zhang, Robert Kirk, Tim Rocktäschel, Edward Grefenstette

Grid-to-Graph: Flexible Spatial Relational Inductive Biases for Reinforcement Learning

Feb 08, 2021
Zhengyao Jiang, Pasquale Minervini, Minqi Jiang, Tim Rocktäschel

Neural Logic Reinforcement Learning

Apr 24, 2019
Zhengyao Jiang, Shan Luo

A Deep Reinforcement Learning Framework for the Financial Portfolio Management Problem

Jul 16, 2017
Zhengyao Jiang, Dixing Xu, Jinjun Liang

Cryptocurrency Portfolio Management with Deep Reinforcement Learning

May 11, 2017
Zhengyao Jiang, Jinjun Liang
