
Yoshua Bengio

DIRO

Learning Powerful Policies by Using Consistent Dynamics Model

Jun 11, 2019
Shagun Sodhani, Anirudh Goyal, Tristan Deleu, Yoshua Bengio, Sergey Levine, Jian Tang

Tackling Climate Change with Machine Learning

Jun 10, 2019
David Rolnick, Priya L. Donti, Lynn H. Kaack, Kelly Kochanski, Alexandre Lacoste, Kris Sankaran, Andrew Slavin Ross, Nikola Milojevic-Dupont, Natasha Jaques, Anna Waldman-Brown, Alexandra Luccioni, Tegan Maharaj, Evan D. Sherwin, S. Karthik Mukkavilli, Konrad P. Kording, Carla Gomes, Andrew Y. Ng, Demis Hassabis, John C. Platt, Felix Creutzig, Jennifer Chayes, Yoshua Bengio

How to Initialize your Network? Robust Initialization for WeightNorm & ResNets

Jun 05, 2019
Devansh Arpit, Victor Campos, Yoshua Bengio

Do Neural Dialog Systems Use the Conversation History Effectively? An Empirical Study

Jun 04, 2019
Chinnadhurai Sankar, Sandeep Subramanian, Christopher Pal, Sarath Chandar, Yoshua Bengio

Updates of Equilibrium Prop Match Gradients of Backprop Through Time in an RNN with Static Input

May 31, 2019
Maxence Ernoult, Julie Grollier, Damien Querlioz, Yoshua Bengio, Benjamin Scellier

Attention Based Pruning for Shift Networks

May 29, 2019
Ghouthi Boukli Hacene, Carlos Lassance, Vincent Gripon, Matthieu Courbariaux, Yoshua Bengio

Non-normal Recurrent Neural Network (nnRNN): learning long time dependencies while improving expressivity with transient dynamics

May 28, 2019
Giancarlo Kerg, Kyle Goyette, Maximilian Puelma Touzel, Gauthier Gidel, Eugene Vorontsov, Yoshua Bengio, Guillaume Lajoie

N-BEATS: Neural basis expansion analysis for interpretable time series forecasting

May 28, 2019
Boris N. Oreshkin, Dmitri Carpov, Nicolas Chapados, Yoshua Bengio

State-Reification Networks: Improving Generalization by Modeling the Distribution of Hidden Representations

May 26, 2019
Alex Lamb, Jonathan Binas, Anirudh Goyal, Sandeep Subramanian, Ioannis Mitliagkas, Denis Kazakov, Yoshua Bengio, Michael C. Mozer

Compositional generalization in a deep seq2seq model by separating syntax and semantics

May 23, 2019
Jake Russin, Jason Jo, Randall C. O'Reilly, Yoshua Bengio