Sudarshan Srinivasan
K-TanH: Hardware Efficient Activations For Deep Learning

Oct 21, 2019
Abhisek Kundu, Sudarshan Srinivasan, Eric C. Qin, Dhiraj Kalamkar, Naveen K. Mellempudi, Dipankar Das, Kunal Banerjee, Bharat Kaul, Pradeep Dubey

High Performance Scalable FPGA Accelerator for Deep Neural Networks

Aug 29, 2019
Sudarshan Srinivasan, Pradeep Janedula, Saurabh Dhoble, Sasikanth Avancha, Dipankar Das, Naveen Mellempudi, Bharat Daga, Martin Langhammer, Gregg Baeckler, Bharat Kaul

A Study of BFLOAT16 for Deep Learning Training

Jun 13, 2019
Dhiraj Kalamkar, Dheevatsa Mudigere, Naveen Mellempudi, Dipankar Das, Kunal Banerjee, Sasikanth Avancha, Dharma Teja Vooturi, Nataraj Jammalamadaka, Jianyu Huang, Hector Yuen, Jiyan Yang, Jongsoo Park, Alexander Heinecke, Evangelos Georganas, Sudarshan Srinivasan, Abhisek Kundu, Misha Smelyanskiy, Bharat Kaul, Pradeep Dubey

Mixed Precision Training With 8-bit Floating Point

May 29, 2019
Naveen Mellempudi, Sudarshan Srinivasan, Dipankar Das, Bharat Kaul
