Honglak Lee

University of Michigan, Ann Arbor

Similarity of Neural Network Representations Revisited

May 14, 2019

Incremental Learning with Unlabeled Data in the Wild

Mar 29, 2019

Robust Inference via Generative Classifiers for Handling Noisy Labels

Jan 31, 2019

Diversity-Sensitive Conditional Generative Adversarial Networks

Jan 25, 2019

Learning Latent Dynamics for Planning from Pixels

Dec 03, 2018

Generative Adversarial Self-Imitation Learning

Dec 03, 2018

Contingency-Aware Exploration in Reinforcement Learning

Nov 05, 2018

Content preserving text generation with attribute controls

Nov 03, 2018

Hierarchical Reinforcement Learning for Zero-shot Generalization with Subtask Dependencies

Nov 02, 2018

A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks

Oct 27, 2018