
Kevin Swersky

University of Toronto

Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One

Dec 11, 2019

MIM: Mutual Information Machine

Oct 14, 2019

High Mutual Information in Representation Learning with Symmetric Variational Inference

Oct 04, 2019

Learning Execution through Neural Code Fusion

Jun 17, 2019
Figure 1 for Learning Execution through Neural Code Fusion
Figure 2 for Learning Execution through Neural Code Fusion
Figure 3 for Learning Execution through Neural Code Fusion
Figure 4 for Learning Execution through Neural Code Fusion
Viaarxiv icon

Flexibly Fair Representation Learning by Disentanglement

Jun 06, 2019

Learning Sparse Networks Using Targeted Dropout

Jun 05, 2019

Graph Normalizing Flows

May 30, 2019

Neural Networks for Modeling Source Code Edits

Apr 04, 2019

Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples

Mar 07, 2019

Learning Memory Access Patterns

Mar 06, 2018