Roland Memisevic


Architectural Complexity Measures of Recurrent Neural Networks

Nov 12, 2016
Saizheng Zhang, Yuhuai Wu, Tong Che, Zhouhan Lin, Roland Memisevic, Ruslan Salakhutdinov, Yoshua Bengio

Theano: A Python framework for fast computation of mathematical expressions

May 09, 2016
The Theano Development Team, Rami Al-Rfou, Guillaume Alain, Amjad Almahairi, Christof Angermueller, Dzmitry Bahdanau, Nicolas Ballas, Frédéric Bastien, Justin Bayer, Anatoly Belikov, Alexander Belopolsky, Yoshua Bengio, Arnaud Bergeron, James Bergstra, Valentin Bisson, Josh Bleecher Snyder, Nicolas Bouchard, Nicolas Boulanger-Lewandowski, Xavier Bouthillier, Alexandre de Brébisson, Olivier Breuleux, Pierre-Luc Carrier, Kyunghyun Cho, Jan Chorowski, Paul Christiano, Tim Cooijmans, Marc-Alexandre Côté, Myriam Côté, Aaron Courville, Yann N. Dauphin, Olivier Delalleau, Julien Demouth, Guillaume Desjardins, Sander Dieleman, Laurent Dinh, Mélanie Ducoffe, Vincent Dumoulin, Samira Ebrahimi Kahou, Dumitru Erhan, Ziye Fan, Orhan Firat, Mathieu Germain, Xavier Glorot, Ian Goodfellow, Matt Graham, Caglar Gulcehre, Philippe Hamel, Iban Harlouchet, Jean-Philippe Heng, Balázs Hidasi, Sina Honari, Arjun Jain, Sébastien Jean, Kai Jia, Mikhail Korobov, Vivek Kulkarni, Alex Lamb, Pascal Lamblin, Eric Larsen, César Laurent, Sean Lee, Simon Lefrancois, Simon Lemieux, Nicholas Léonard, Zhouhan Lin, Jesse A. Livezey, Cory Lorenz, Jeremiah Lowin, Qianli Ma, Pierre-Antoine Manzagol, Olivier Mastropietro, Robert T. McGibbon, Roland Memisevic, Bart van Merriënboer, Vincent Michalski, Mehdi Mirza, Alberto Orlandi, Christopher Pal, Razvan Pascanu, Mohammad Pezeshki, Colin Raffel, Daniel Renshaw, Matthew Rocklin, Adriana Romero, Markus Roth, Peter Sadowski, John Salvatier, François Savard, Jan Schlüter, John Schulman, Gabriel Schwartz, Iulian Vlad Serban, Dmitriy Serdyuk, Samira Shabanian, Étienne Simon, Sigurd Spieckermann, S. Ramana Subramanyam, Jakub Sygnowski, Jérémie Tanguay, Gijs van Tulder, Joseph Turian, Sebastian Urban, Pascal Vincent, Francesco Visin, Harm de Vries, David Warde-Farley, Dustin J. Webb, Matthew Willson, Kelvin Xu, Lijun Xue, Li Yao, Saizheng Zhang, Ying Zhang

RATM: Recurrent Attentive Tracking Model

Apr 28, 2016
Samira Ebrahimi Kahou, Vincent Michalski, Roland Memisevic

Regularizing RNNs by Stabilizing Activations

Apr 26, 2016
David Krueger, Roland Memisevic

Neural Networks with Few Multiplications

Feb 26, 2016
Zhouhan Lin, Matthieu Courbariaux, Roland Memisevic, Yoshua Bengio

Dropout as data augmentation

Jan 08, 2016
Xavier Bouthillier, Kishore Konda, Pascal Vincent, Roland Memisevic

Denoising Criterion for Variational Auto-Encoding Framework

Jan 04, 2016
Daniel Jiwoong Im, Sungjin Ahn, Roland Memisevic, Yoshua Bengio

How far can we go without convolution: Improving fully-connected networks

Nov 09, 2015
Zhouhan Lin, Roland Memisevic, Kishore Konda

Conservativeness of untied auto-encoders

Sep 21, 2015
Daniel Jiwoong Im, Mohamed Ishmael Diwan Belghazi, Roland Memisevic

Zero-bias autoencoders and the benefits of co-adapting features

Apr 08, 2015
Kishore Konda, Roland Memisevic, David Krueger
