
Devansh Arpit

A Walk with SGD

May 30, 2018

Fraternal Dropout

Mar 28, 2018

Residual Connections Encourage Iterative Inference

Mar 08, 2018

Variational Bi-LSTMs

Nov 15, 2017

On Optimality Conditions for Auto-Encoder Signal Recovery

Jul 13, 2017

A Closer Look at Memorization in Deep Networks

Jul 01, 2017

Normalization Propagation: A Parametric Technique for Removing Internal Covariate Shift in Deep Networks

Jul 12, 2016

Why Regularized Auto-Encoders learn Sparse Representation?

Jun 17, 2016

Dimensionality Reduction with Subspace Structure Preservation

Apr 06, 2016

Is Joint Training Better for Deep Auto-Encoders?

Jun 15, 2015