Mahmut T Kandemir
Exploiting Activation based Gradient Output Sparsity to Accelerate Backpropagation in CNNs

Sep 16, 2021
Anup Sarma, Sonali Singh, Huaipan Jiang, Ashutosh Pattnaik, Asit K Mishra, Vijaykrishnan Narayanan, Mahmut T Kandemir, Chita R Das

Structured in Space, Randomized in Time: Leveraging Dropout in RNNs for Efficient Training

Jun 22, 2021
Anup Sarma, Sonali Singh, Huaipan Jiang, Rui Zhang, Mahmut T Kandemir, Chita R Das
