Huayu Chen — Publications

Aligning Diffusion Behaviors with Q-functions for Efficient Continuous Control

Jul 12, 2024

C-GAIL: Stabilizing Generative Adversarial Imitation Learning with Control Theory

Feb 26, 2024

Noise Contrastive Alignment of Language Models with Explicit Rewards

Feb 08, 2024

Score Regularized Policy Optimization through Diffusion Behavior

Oct 12, 2023

Contrastive Energy Prediction for Exact Energy-Guided Diffusion Sampling in Offline Reinforcement Learning

Apr 25, 2023

Offline Reinforcement Learning via High-Fidelity Generative Behavior Modeling

Sep 29, 2022

Weight-based Channel-model Matrix Framework: a reasonable solution for EEG-based cross-dataset emotion recognition

Sep 13, 2022

Tianshou: a Highly Modularized Deep Reinforcement Learning Library

Jul 29, 2021

A study of resting-state EEG biomarkers for depression recognition

Feb 23, 2020