Naigang Wang

Mitigating the Impact of Outlier Channels for Language Model Quantization with Activation Regularization

Apr 04, 2024
Aniruddha Nrusimha, Mayank Mishra, Naigang Wang, Dan Alistarh, Rameswar Panda, Yoon Kim

COMQ: A Backpropagation-Free Algorithm for Post-Training Quantization

Mar 11, 2024
Aozhong Zhang, Zi Yang, Naigang Wang, Yingyong Qi, Jack Xin, Xin Li, Penghang Yin

4-bit Quantization of LSTM-based Speech Recognition Models

Aug 27, 2021
Andrea Fasoli, Chia-Yu Chen, Mauricio Serrano, Xiao Sun, Naigang Wang, Swagath Venkataramani, George Saon, Xiaodong Cui, Brian Kingsbury, Wei Zhang, Zoltán Tüske, Kailash Gopalakrishnan

ScaleCom: Scalable Sparsified Gradient Compression for Communication-Efficient Distributed Training

Apr 21, 2021
Chia-Yu Chen, Jiamin Ni, Songtao Lu, Xiaodong Cui, Pin-Yu Chen, Xiao Sun, Naigang Wang, Swagath Venkataramani, Vijayalakshmi Srinivasan, Wei Zhang, Kailash Gopalakrishnan

All at Once Network Quantization via Collaborative Knowledge Transfer

Mar 02, 2021
Ximeng Sun, Rameswar Panda, Chun-Fu Chen, Naigang Wang, Bowen Pan, Kailash Gopalakrishnan, Aude Oliva, Rogerio Feris, Kate Saenko

A Comprehensive Survey on Hardware-Aware Neural Architecture Search

Jan 22, 2021
Hadjer Benmeziane, Kaoutar El Maghraoui, Hamza Ouarnoughi, Smail Niar, Martin Wistuba, Naigang Wang

Accumulation Bit-Width Scaling For Ultra-Low Precision Training Of Deep Networks

Jan 19, 2019
Charbel Sakr, Naigang Wang, Chia-Yu Chen, Jungwook Choi, Ankur Agrawal, Naresh Shanbhag, Kailash Gopalakrishnan

Training Deep Neural Networks with 8-bit Floating Point Numbers

Dec 19, 2018
Naigang Wang, Jungwook Choi, Daniel Brand, Chia-Yu Chen, Kailash Gopalakrishnan
