Naveen Mellempudi

Exploring FPGA designs for MX and beyond

Jul 01, 2024
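For context on the formats this line of work targets: MX (microscaling) formats share one power-of-two scale across a small block of narrow elements. The sketch below is a generic block-scaled quantizer in that spirit; the integer grid, rounding, and scale selection are illustrative choices, not the OCP MX specification.

```python
import math

def mx_quantize_block(block):
    """Quantize a small block of floats to signed 8-bit codes that share one
    power-of-two scale (illustrative of microscaling-style block formats)."""
    amax = max(abs(x) for x in block)
    if amax == 0.0:
        return [0] * len(block), 1.0
    # smallest power-of-two scale such that amax / scale <= 127
    scale = 2.0 ** math.ceil(math.log2(amax / 127.0))
    return [round(x / scale) for x in block], scale

def mx_dequantize_block(codes, scale):
    """Reconstruct approximate values from shared-scale integer codes."""
    return [c * scale for c in codes]
```

Values that are powers of two survive the round trip exactly; general values are rounded to the nearest point on the shared grid.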

Efficient Post-training Quantization with FP8 Formats

Sep 26, 2023

FP8 Formats for Deep Learning

Sep 12, 2022
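For reference, the E4M3 format proposed in this paper uses 1 sign, 4 exponent, and 3 mantissa bits with bias 7; it reserves no infinities and encodes NaN as S.1111.111, giving a maximum finite value of 448. A minimal decoder sketch (the encoding direction and the E5M2 variant are omitted):

```python
def decode_e4m3(byte: int) -> float:
    """Decode an 8-bit E4M3 value: 1 sign, 4 exponent, 3 mantissa bits,
    exponent bias 7.  E4M3 has no infinities; S.1111.111 is NaN."""
    s = (byte >> 7) & 1
    e = (byte >> 3) & 0xF
    m = byte & 0x7
    sign = -1.0 if s else 1.0
    if e == 0xF and m == 0x7:
        return float("nan")
    if e == 0:                          # subnormal: 2^(1-7) * m/8
        return sign * 2.0 ** -6 * (m / 8.0)
    return sign * 2.0 ** (e - 7) * (1.0 + m / 8.0)
```

For example, `0b0_1111_110` decodes to 448.0, the largest finite E4M3 value.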

High Performance Scalable FPGA Accelerator for Deep Neural Networks

Aug 29, 2019

A Study of BFLOAT16 for Deep Learning Training

Jun 13, 2019
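For context, bfloat16 keeps float32's sign and 8-bit exponent and truncates the mantissa to 7 bits, so it preserves float32's dynamic range at half the storage. A minimal conversion sketch; round-to-nearest-even is one common rounding choice here, and NaN edge cases are omitted:

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    """Narrow a float32 to bfloat16 by keeping the top 16 bits
    (sign, 8 exponent bits, 7 mantissa bits), rounding to nearest even.
    NaN inputs are not handled in this sketch."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    rounding_bias = 0x7FFF + ((bits >> 16) & 1)   # ties go to even
    return ((bits + rounding_bias) >> 16) & 0xFFFF

def bf16_bits_to_f32(b: int) -> float:
    """Widen bfloat16 back to float32 by zero-filling the low 16 bits."""
    return struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))[0]
```

Values whose mantissa already fits in 7 bits (e.g. 3.140625) round-trip exactly.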

Mixed Precision Training With 8-bit Floating Point

May 29, 2019

Mixed Precision Training of Convolutional Neural Networks using Integer Operations

Feb 23, 2018

On Scale-out Deep Learning Training for Cloud and HPC

Jan 24, 2018

Ternary Residual Networks

Oct 31, 2017

Ternary Neural Networks with Fine-Grained Quantization

May 30, 2017
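For context on the two ternary papers above: ternary networks constrain weights to {-1, 0, +1} times a real-valued scale. Below is a generic threshold-based ternarization sketch; the threshold heuristic (0.7 times the mean absolute weight) is a common choice in the ternary-weight literature, not the exact scheme of either paper.

```python
def ternarize(weights, delta_scale=0.7):
    """Map weights to codes in {-1, 0, +1} plus one per-tensor scale.
    Threshold: delta = delta_scale * mean(|w|).
    Scale: mean of |w| over the weights that survive the threshold.
    Illustrative heuristic, not a specific paper's algorithm."""
    abs_w = [abs(x) for x in weights]
    delta = delta_scale * sum(abs_w) / len(weights)
    codes = [1 if x > delta else -1 if x < -delta else 0 for x in weights]
    kept = [a for a in abs_w if a > delta]
    alpha = sum(kept) / len(kept) if kept else 0.0
    return codes, alpha
```

The reconstruction `alpha * code` keeps large-magnitude weights and zeroes out small ones, which is what makes ternary inference cheap: multiplications reduce to sign flips and additions.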