Linrui Zhang

CAT: Closed-loop Adversarial Training for Safe End-to-End Driving

Oct 19, 2023
Linrui Zhang, Zhenghao Peng, Quanyi Li, Bolei Zhou

DiffCPS: Diffusion Model based Constrained Policy Search for Offline Reinforcement Learning

Oct 09, 2023
Longxiang He, Linrui Zhang, Junbo Tan, Xueqian Wang

Are Large Language Models Really Robust to Word-Level Perturbations?

Sep 27, 2023
Haoyu Wang, Guozheng Ma, Cong Yu, Ning Gui, Linrui Zhang, Zhiqi Huang, Suwei Ma, Yongzhe Chang, Sen Zhang, Li Shen, Xueqian Wang, Peilin Zhao, Dacheng Tao

Learning Better with Less: Effective Augmentation for Sample-Efficient Visual Reinforcement Learning

May 25, 2023
Guozheng Ma, Linrui Zhang, Haoyu Wang, Lu Li, Zilin Wang, Zhen Wang, Li Shen, Xueqian Wang, Dacheng Tao

SaFormer: A Conditional Sequence Modeling Approach to Offline Safe Reinforcement Learning

Jan 28, 2023
Qin Zhang, Linrui Zhang, Haoran Xu, Li Shen, Bowen Wang, Yongzhe Chang, Xueqian Wang, Bo Yuan, Dacheng Tao

Safety Correction from Baseline: Towards the Risk-aware Policy in Robotics via Dual-agent Reinforcement Learning

Dec 14, 2022
Linrui Zhang, Zichen Yan, Li Shen, Shoujie Li, Xueqian Wang, Dacheng Tao

Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks

Dec 12, 2022
Linrui Zhang, Qin Zhang, Li Shen, Bo Yuan, Xueqian Wang, Dacheng Tao

Constrained Update Projection Approach to Safe Policy Optimization

Sep 15, 2022
Long Yang, Jiaming Ji, Juntao Dai, Linrui Zhang, Binbin Zhou, Pengfei Li, Yaodong Yang, Gang Pan

SafeRL-Kit: Evaluating Efficient Reinforcement Learning Methods for Safe Autonomous Driving

Jun 17, 2022
Linrui Zhang, Qin Zhang, Li Shen, Bo Yuan, Xueqian Wang
