Wen Sun

Refined Value-Based Offline RL under Realizability and Partial Coverage

Feb 05, 2023

Hybrid RL: Using Both Offline and Online Data Can Make RL Efficient

Oct 13, 2022

Sample-efficient Safe Learning for Online Nonlinear Control with Control Barrier Functions

Jul 29, 2022

Future-Dependent Value-Based Off-Policy Evaluation in POMDPs

Jul 26, 2022

PAC Reinforcement Learning for Predictive State Representations

Jul 15, 2022

Learning Bellman Complete Representations for Offline Policy Evaluation

Jul 12, 2022

Computationally Efficient PAC RL in POMDPs with Latent Determinism and Conditional Embeddings

Jun 24, 2022

Provably Efficient Reinforcement Learning in Partially Observable Dynamical Systems

Jun 24, 2022

Minimum Noticeable Difference based Adversarial Privacy Preserving Image Generation

Jun 17, 2022

Provable Benefits of Representational Transfer in Reinforcement Learning

May 29, 2022