Yan Duan

Variable Skipping for Autoregressive Range Density Estimation

Jul 10, 2020
Eric Liang, Zongheng Yang, Ion Stoica, Pieter Abbeel, Yan Duan, Xi Chen

NeuroCard: One Cardinality Estimator for All Tables

Jun 15, 2020
Zongheng Yang, Amog Kamsetty, Sifei Luan, Eric Liang, Yan Duan, Xi Chen, Ion Stoica

Evaluating Protein Transfer Learning with TAPE

Jun 19, 2019
Roshan Rao, Nicholas Bhattacharya, Neil Thomas, Yan Duan, Xi Chen, John Canny, Pieter Abbeel, Yun S. Song

Selectivity Estimation with Deep Likelihood Models

May 10, 2019
Zongheng Yang, Eric Liang, Amog Kamsetty, Chenggang Wu, Yan Duan, Xi Chen, Pieter Abbeel, Joseph M. Hellerstein, Sanjay Krishnan, Ion Stoica

Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design

Feb 01, 2019
Jonathan Ho, Xi Chen, Aravind Srinivas, Yan Duan, Pieter Abbeel

Model-Ensemble Trust-Region Policy Optimization

Oct 05, 2018
Thanard Kurutach, Ignasi Clavera, Yan Duan, Aviv Tamar, Pieter Abbeel

Variance Reduction for Policy Gradient with Action-Dependent Factorized Baselines

Mar 20, 2018
Cathy Wu, Aravind Rajeswaran, Yan Duan, Vikash Kumar, Alexandre M. Bayen, Sham Kakade, Igor Mordatch, Pieter Abbeel

Some Considerations on Learning to Explore via Meta-Reinforcement Learning

Mar 03, 2018
Bradly C. Stadie, Ge Yang, Rein Houthooft, Xi Chen, Yan Duan, Yuhuai Wu, Pieter Abbeel, Ilya Sutskever

#Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning

Dec 05, 2017
Haoran Tang, Rein Houthooft, Davis Foote, Adam Stooke, Xi Chen, Yan Duan, John Schulman, Filip De Turck, Pieter Abbeel

One-Shot Imitation Learning

Dec 04, 2017
Yan Duan, Marcin Andrychowicz, Bradly C. Stadie, Jonathan Ho, Jonas Schneider, Ilya Sutskever, Pieter Abbeel, Wojciech Zaremba
