Bo Cheng

Recurrent Model Predictive Control
Feb 23, 2021
Zhengyu Liu, Jingliang Duan, Wenxuan Wang, Shengbo Eben Li, Yuming Yin, Ziyu Lin, Qi Sun, Bo Cheng

Mixed Policy Gradient
Feb 23, 2021
Yang Guan, Jingliang Duan, Shengbo Eben Li, Jie Li, Jianyu Chen, Bo Cheng

Mixed Reinforcement Learning with Additive Stochastic Uncertainty
Feb 28, 2020
Yao Mu, Shengbo Eben Li, Chang Liu, Qi Sun, Bingbing Nie, Bo Cheng, Baiyu Peng

Distributional Soft Actor-Critic: Off-Policy Reinforcement Learning for Addressing Value Estimation Errors
Feb 23, 2020
Jingliang Duan, Yang Guan, Shengbo Eben Li, Yangang Ren, Bo Cheng

Addressing Value Estimation Errors in Reinforcement Learning with a State-Action Return Distribution Function
Jan 09, 2020
Jingliang Duan, Yang Guan, Yangang Ren, Shengbo Eben Li, Bo Cheng

Direct and indirect reinforcement learning
Dec 23, 2019
Yang Guan, Shengbo Eben Li, Jingliang Duan, Jie Li, Yangang Ren, Bo Cheng

Deep adaptive dynamic programming for nonaffine nonlinear optimal control problem with state constraints
Nov 26, 2019
Jingliang Duan, Zhengyu Liu, Shengbo Eben Li, Qi Sun, Zhenzhong Jia, Bo Cheng

Generalized Policy Iteration for Optimal Control in Continuous Time
Sep 11, 2019
Jingliang Duan, Shengbo Eben Li, Zhengyu Liu, Monimoy Bujarbaruah, Bo Cheng

Intention-aware Long Horizon Trajectory Prediction of Surrounding Vehicles using Dual LSTM Networks
Jun 06, 2019
Long Xin, Pin Wang, Ching-Yao Chan, Jianyu Chen, Shengbo Eben Li, Bo Cheng

Indirect Shared Control of Highly Automated Vehicles for Cooperative Driving between Driver and Automation
Apr 04, 2017
Renjie Li, Yanan Li, Shengbo Eben Li, Etienne Burdet, Bo Cheng