Bharat Kaul

AUTOSPARSE: Towards Automated Sparse Training of Deep Neural Networks

Apr 14, 2023
Abhisek Kundu, Naveen K. Mellempudi, Dharma Teja Vooturi, Bharat Kaul, Pradeep Dubey

Efficient and Generic 1D Dilated Convolution Layer for Deep Learning

Apr 16, 2021
Narendra Chaudhary, Sanchit Misra, Dhiraj Kalamkar, Alexander Heinecke, Evangelos Georganas, Barukh Ziv, Menachem Adelman, Bharat Kaul

MADRaS : Multi Agent Driving Simulator

Oct 02, 2020
Anirban Santara, Sohan Rudra, Sree Aditya Buridi, Meha Kaushik, Abhishek Naik, Bharat Kaul, Balaraman Ravindran

PolyDL: Polyhedral Optimizations for Creation of High Performance DL primitives

Jun 02, 2020
Sanket Tavarageri, Alexander Heinecke, Sasikanth Avancha, Gagandeep Goyal, Ramakrishna Upadrasta, Bharat Kaul

PolyScientist: Automatic Loop Transformations Combined with Microkernels for Optimization of Deep Learning Primitives

Feb 06, 2020
Sanket Tavarageri, Alexander Heinecke, Sasikanth Avancha, Gagandeep Goyal, Ramakrishna Upadrasta, Bharat Kaul

SEERL: Sample Efficient Ensemble Reinforcement Learning

Jan 15, 2020
Rohan Saphal, Balaraman Ravindran, Dheevatsa Mudigere, Sasikanth Avancha, Bharat Kaul

K-TanH: Hardware Efficient Activations For Deep Learning

Oct 21, 2019
Abhisek Kundu, Sudarshan Srinivasan, Eric C. Qin, Dhiraj Kalamkar, Naveen K. Mellempudi, Dipankar Das, Kunal Banerjee, Bharat Kaul, Pradeep Dubey

High Performance Scalable FPGA Accelerator for Deep Neural Networks

Aug 29, 2019
Sudarshan Srinivasan, Pradeep Janedula, Saurabh Dhoble, Sasikanth Avancha, Dipankar Das, Naveen Mellempudi, Bharat Daga, Martin Langhammer, Gregg Baeckler, Bharat Kaul

A Study of BFLOAT16 for Deep Learning Training

Jun 13, 2019
Dhiraj Kalamkar, Dheevatsa Mudigere, Naveen Mellempudi, Dipankar Das, Kunal Banerjee, Sasikanth Avancha, Dharma Teja Vooturi, Nataraj Jammalamadaka, Jianyu Huang, Hector Yuen, Jiyan Yang, Jongsoo Park, Alexander Heinecke, Evangelos Georganas, Sudarshan Srinivasan, Abhisek Kundu, Misha Smelyanskiy, Bharat Kaul, Pradeep Dubey
