
Qingfeng Lan

Weight Clipping for Deep Continual and Reinforcement Learning (Jul 01, 2024)

More Efficient Randomized Exploration for Reinforcement Learning via Approximate Sampling (Jun 18, 2024)

Elephant Neural Networks: Born to Be a Continual Learner (Oct 02, 2023)

Provable and Practical: Efficient Exploration in Reinforcement Learning via Langevin Monte Carlo (May 29, 2023)

Learning to Optimize for Reinforcement Learning (Feb 03, 2023)

Memory-efficient Reinforcement Learning with Knowledge Consolidation (May 22, 2022)

Variational Quantum Soft Actor-Critic (Dec 20, 2021)

Predictive Representation Learning for Language Modeling (May 29, 2021)

Model-free Policy Learning with Reward Gradients (Mar 09, 2021)

Maxmin Q-learning: Controlling the Estimation Bias of Q-learning (Feb 16, 2020)