Satinder Singh

Learning Independently-Obtainable Reward Functions
Jan 31, 2019

Generative Adversarial Self-Imitation Learning
Dec 03, 2018

Learning End-to-End Goal-Oriented Dialog with Multiple Answers
Aug 24, 2018

Many-Goals Reinforcement Learning
Jun 22, 2018

On Learning Intrinsic Rewards for Policy Gradient Methods
Jun 22, 2018

Self-Imitation Learning
Jun 14, 2018

Named Entities troubling your Neural Methods? Build NE-Table: A neural approach for handling Named Entities
Apr 22, 2018

The Advantage of Doubling: A Deep Reinforcement Learning Approach to Studying the Double Team in the NBA
Mar 08, 2018

Markov Decision Processes with Continuous Side Information
Nov 15, 2017

Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning
Nov 07, 2017