Giancarlo Kerg

On Neural Architecture Inductive Biases for Relational Tasks

Jun 09, 2022
Giancarlo Kerg, Sarthak Mittal, David Rolnick, Yoshua Bengio, Blake Richards, Guillaume Lajoie

Figures 1–4

Continuous-Time Meta-Learning with Forward Mode Differentiation

Mar 02, 2022
Tristan Deleu, David Kanaa, Leo Feng, Giancarlo Kerg, Yoshua Bengio, Guillaume Lajoie, Pierre-Luc Bacon

Figures 1–4

Catastrophic Fisher Explosion: Early Phase Fisher Matrix Impacts Generalization

Dec 28, 2020
Stanislaw Jastrzebski, Devansh Arpit, Oliver Astrand, Giancarlo Kerg, Huan Wang, Caiming Xiong, Richard Socher, Kyunghyun Cho, Krzysztof Geras

Figures 1–4

Advantages of biologically-inspired adaptive neural activation in RNNs during learning

Jun 22, 2020
Victor Geadah, Giancarlo Kerg, Stefan Horoi, Guy Wolf, Guillaume Lajoie

Figures 1–4

Untangling tradeoffs between recurrence and self-attention in neural networks

Jun 16, 2020
Giancarlo Kerg, Bhargav Kanuparthi, Anirudh Goyal, Kyle Goyette, Yoshua Bengio, Guillaume Lajoie

Figures 1–4

Non-normal Recurrent Neural Network (nnRNN): learning long time dependencies while improving expressivity with transient dynamics

May 28, 2019
Giancarlo Kerg, Kyle Goyette, Maximilian Puelma Touzel, Gauthier Gidel, Eugene Vorontsov, Yoshua Bengio, Guillaume Lajoie

Figures 1–4