
Sylvain Gelly

INRIA Futurs

On Self Modulation for Generative Adversarial Networks

Oct 02, 2018

Clustering Meets Implicit Generative Models

Aug 02, 2018

Temporal Difference Learning with Neural Networks - Study of the Leakage Propagation Problem

Jul 09, 2018

On Accurate Evaluation of GANs for Language Generation

Jun 14, 2018

MemGEN: Memory is All You Need

Mar 29, 2018

Gradient Descent Quantizes ReLU Network Features

Mar 22, 2018

Wasserstein Auto-Encoders

Mar 12, 2018

Toward Optimal Run Racing: Application to Deep Learning Calibration

Jun 20, 2017

Critical Hyper-Parameters: No Random, No Cry

Jun 10, 2017

Better Text Understanding Through Image-To-Text Transfer

May 26, 2017