Linrui Zhang

Chemistry3D: Robotic Interaction Benchmark for Chemistry Experiments

Jun 12, 2024

CAT: Closed-loop Adversarial Training for Safe End-to-End Driving

Oct 19, 2023

DiffCPS: Diffusion Model based Constrained Policy Search for Offline Reinforcement Learning

Oct 09, 2023

Are Large Language Models Really Robust to Word-Level Perturbations?

Sep 27, 2023

Learning Better with Less: Effective Augmentation for Sample-Efficient Visual Reinforcement Learning

May 25, 2023

SaFormer: A Conditional Sequence Modeling Approach to Offline Safe Reinforcement Learning

Jan 28, 2023

Safety Correction from Baseline: Towards the Risk-aware Policy in Robotics via Dual-agent Reinforcement Learning

Dec 14, 2022

Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks

Dec 12, 2022

Constrained Update Projection Approach to Safe Policy Optimization

Sep 15, 2022

SafeRL-Kit: Evaluating Efficient Reinforcement Learning Methods for Safe Autonomous Driving

Jun 17, 2022