
Ping Tak Peter Tang


Low-Precision Hardware Architectures Meet Recommendation Model Inference at Scale

May 26, 2021

Mixed-Precision Embedding Using a Cache

Oct 23, 2020

Fast Distributed Training of Deep Neural Networks: Dynamic Communication Thresholding for Model and Data Parallelism

Oct 18, 2020

A Progressive Batching L-BFGS Method for Machine Learning

May 30, 2018

Dictionary Learning by Dynamical Neural Networks

May 23, 2018

Enabling Sparse Winograd Convolution by Native Pruning

Oct 13, 2017

Faster CNNs with Direct Sparse Convolutions and Guided Pruning

Jul 28, 2017

Sparse Coding by Spiking Neural Networks: Convergence Theory and Computational Results

May 15, 2017

On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima

Feb 09, 2017