
Rathinakumar Appuswamy

Efficient and Effective Methods for Mixed Precision Neural Network Quantization for Faster, Energy-efficient Inference

Jan 30, 2023
Deepika Bablani, Jeffrey L. McKinstry, Steven K. Esser, Rathinakumar Appuswamy, Dharmendra S. Modha

Learned Step Size Quantization

Feb 21, 2019
Steven K. Esser, Jeffrey L. McKinstry, Deepika Bablani, Rathinakumar Appuswamy, Dharmendra S. Modha

Discovering Low-Precision Networks Close to Full-Precision Networks for Efficient Embedded Inference

Sep 11, 2018
Jeffrey L. McKinstry, Steven K. Esser, Rathinakumar Appuswamy, Deepika Bablani, John V. Arthur, Izzet B. Yildiz, Dharmendra S. Modha

Structured Convolution Matrices for Energy-efficient Deep learning

Jun 08, 2016
Rathinakumar Appuswamy, Tapan Nayak, John Arthur, Steven Esser, Paul Merolla, Jeffrey McKinstry, Timothy Melano, Myron Flickner, Dharmendra Modha

Deep neural networks are robust to weight binarization and other non-linear distortions

Jun 07, 2016
Paul Merolla, Rathinakumar Appuswamy, John Arthur, Steve K. Esser, Dharmendra Modha

Convolutional Networks for Fast, Energy-Efficient Neuromorphic Computing

May 24, 2016
Steven K. Esser, Paul A. Merolla, John V. Arthur, Andrew S. Cassidy, Rathinakumar Appuswamy, Alexander Andreopoulos, David J. Berg, Jeffrey L. McKinstry, Timothy Melano, Davis R. Barch, Carmelo di Nolfo, Pallab Datta, Arnon Amir, Brian Taba, Myron D. Flickner, Dharmendra S. Modha
