Abhisek Kundu

AUTOSPARSE: Towards Automated Sparse Training of Deep Neural Networks

Apr 14, 2023
Abhisek Kundu, Naveen K. Mellempudi, Dharma Teja Vooturi, Bharat Kaul, Pradeep Dubey

Tensor Processing Primitives: A Programming Abstraction for Efficiency and Portability in Deep Learning Workloads

Apr 14, 2021
Evangelos Georganas, Dhiraj Kalamkar, Sasikanth Avancha, Menachem Adelman, Cristina Anderson, Alexander Breuer, Narendra Chaudhary, Abhisek Kundu, Vasimuddin Md, Sanchit Misra, Ramanarayan Mohanty, Hans Pabst, Barukh Ziv, Alexander Heinecke

K-TanH: Hardware Efficient Activations For Deep Learning

Oct 21, 2019
Abhisek Kundu, Sudarshan Srinivasan, Eric C. Qin, Dhiraj Kalamkar, Naveen K. Mellempudi, Dipankar Das, Kunal Banerjee, Bharat Kaul, Pradeep Dubey

A Study of BFLOAT16 for Deep Learning Training

Jun 13, 2019
Dhiraj Kalamkar, Dheevatsa Mudigere, Naveen Mellempudi, Dipankar Das, Kunal Banerjee, Sasikanth Avancha, Dharma Teja Vooturi, Nataraj Jammalamadaka, Jianyu Huang, Hector Yuen, Jiyan Yang, Jongsoo Park, Alexander Heinecke, Evangelos Georganas, Sudarshan Srinivasan, Abhisek Kundu, Misha Smelyanskiy, Bharat Kaul, Pradeep Dubey

Ternary Residual Networks

Oct 31, 2017
Abhisek Kundu, Kunal Banerjee, Naveen Mellempudi, Dheevatsa Mudigere, Dipankar Das, Bharat Kaul, Pradeep Dubey

Ternary Neural Networks with Fine-Grained Quantization

May 30, 2017
Naveen Mellempudi, Abhisek Kundu, Dheevatsa Mudigere, Dipankar Das, Bharat Kaul, Pradeep Dubey
