
Bradley McDanel


Accelerating Vision Transformer Training via a Patch Sampling Schedule

Aug 19, 2022
Bradley McDanel, Chi Phuong Huynh


Accelerating DNN Training with Structured Data Gradient Pruning

Feb 01, 2022
Bradley McDanel, Helia Dinh, John Magallanes


FAST: DNN Training Under Variable Precision Block Floating Point with Stochastic Rounding

Oct 28, 2021
Sai Qian Zhang, Bradley McDanel, H. T. Kung


Term Revealing: Furthering Quantization at Run Time on Quantized DNNs

Jul 26, 2020
H. T. Kung, Bradley McDanel, Sai Qian Zhang


Full-stack Optimization for Accelerating CNNs with FPGA Validation

May 01, 2019
Bradley McDanel, Sai Qian Zhang, H. T. Kung, Xin Dong


Packing Sparse Convolutional Neural Networks for Efficient Systolic Array Implementations: Column Combining Under Joint Optimization

Nov 07, 2018
H. T. Kung, Bradley McDanel, Sai Qian Zhang


Incomplete Dot Products for Dynamic Computation Scaling in Neural Network Inference

Oct 21, 2017
Bradley McDanel, Surat Teerapittayanon, H. T. Kung


Embedded Binarized Neural Networks

Sep 06, 2017
Bradley McDanel, Surat Teerapittayanon, H. T. Kung


BranchyNet: Fast Inference via Early Exiting from Deep Neural Networks

Sep 06, 2017
Surat Teerapittayanon, Bradley McDanel, H. T. Kung
