Swagath Venkataramani
Enhance DNN Adversarial Robustness and Efficiency via Injecting Noise to Non-Essential Neurons

Feb 06, 2024
Zhenyu Liu, Garrett Gagnon, Swagath Venkataramani, Liu Liu


Approximate Computing and the Efficient Machine Learning Expedition

Oct 02, 2022
Jörg Henkel, Hai Li, Anand Raghunathan, Mehdi B. Tahoori, Swagath Venkataramani, Xiaoxuan Yang, Georgios Zervakis


Accelerating Inference and Language Model Fusion of Recurrent Neural Network Transducers via End-to-End 4-bit Quantization

Jun 16, 2022
Andrea Fasoli, Chia-Yu Chen, Mauricio Serrano, Swagath Venkataramani, George Saon, Xiaodong Cui, Brian Kingsbury, Kailash Gopalakrishnan


4-bit Quantization of LSTM-based Speech Recognition Models

Aug 27, 2021
Andrea Fasoli, Chia-Yu Chen, Mauricio Serrano, Xiao Sun, Naigang Wang, Swagath Venkataramani, George Saon, Xiaodong Cui, Brian Kingsbury, Wei Zhang, Zoltán Tüske, Kailash Gopalakrishnan


ScaleCom: Scalable Sparsified Gradient Compression for Communication-Efficient Distributed Training

Apr 21, 2021
Chia-Yu Chen, Jiamin Ni, Songtao Lu, Xiaodong Cui, Pin-Yu Chen, Xiao Sun, Naigang Wang, Swagath Venkataramani, Vijayalakshmi Srinivasan, Wei Zhang, Kailash Gopalakrishnan


Bridging the Accuracy Gap for 2-bit Quantized Neural Networks (QNN)

Jul 17, 2018
Jungwook Choi, Pierce I-Jen Chuang, Zhuo Wang, Swagath Venkataramani, Vijayalakshmi Srinivasan, Kailash Gopalakrishnan


PACT: Parameterized Clipping Activation for Quantized Neural Networks

Jul 17, 2018
Jungwook Choi, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srinivasan, Kailash Gopalakrishnan


SparCE: Sparsity-aware General Purpose Core Extensions to Accelerate Deep Neural Networks

Nov 29, 2017
Sanchari Sen, Shubham Jain, Swagath Venkataramani, Anand Raghunathan


DyVEDeep: Dynamic Variable Effort Deep Neural Networks

Apr 04, 2017
Sanjay Ganapathy, Swagath Venkataramani, Balaraman Ravindran, Anand Raghunathan


Energy-Efficient Object Detection using Semantic Decomposition

Sep 20, 2016
Priyadarshini Panda, Swagath Venkataramani, Abhronil Sengupta, Anand Raghunathan, Kaushik Roy
