Ping Tak Peter Tang

Low-Precision Hardware Architectures Meet Recommendation Model Inference at Scale

May 26, 2021
Zhaoxia Deng, Jongsoo Park, Ping Tak Peter Tang, Haixin Liu, Jie Amy Yang, Hector Yuen, Jianyu Huang, Daya Khudia, Xiaohan Wei, Ellie Wen, Dhruv Choudhary, Raghuraman Krishnamoorthi, Carole-Jean Wu, Satish Nadathur, Changkyu Kim, Maxim Naumov, Sam Naghshineh, Mikhail Smelyanskiy


Mixed-Precision Embedding Using a Cache

Oct 23, 2020
Jie Amy Yang, Jianyu Huang, Jongsoo Park, Ping Tak Peter Tang, Andrew Tulloch

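The entry above names a cache-based approach to mixed-precision embeddings. As a rough illustration of the general idea (low-precision embedding rows in main storage, with a small full-precision cache for recently used rows), here is a minimal NumPy sketch; the row-wise int8 quantization, the LRU-style cache, and all names and constants are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from collections import OrderedDict

class MixedPrecisionEmbedding:
    """Toy embedding table: uint8 rows with per-row scale/offset in main
    storage, plus a small float32 cache for recently used rows.
    Illustrative sketch only -- not the algorithm from the paper."""

    def __init__(self, table_fp32, cache_capacity=1024):
        # Row-wise affine 8-bit quantization of the full-precision table.
        self.row_min = table_fp32.min(axis=1, keepdims=True)
        row_max = table_fp32.max(axis=1, keepdims=True)
        self.scale = (row_max - self.row_min) / 255.0 + 1e-12
        self.q_rows = np.round((table_fp32 - self.row_min) / self.scale).astype(np.uint8)
        self.cache = OrderedDict()            # row id -> float32 row
        self.cache_capacity = cache_capacity

    def lookup(self, ids):
        out = np.empty((len(ids), self.q_rows.shape[1]), dtype=np.float32)
        for i, idx in enumerate(ids):
            if idx in self.cache:             # cache hit: serve full precision
                self.cache.move_to_end(idx)
                out[i] = self.cache[idx]
            else:                             # miss: dequantize and fill cache
                row = self.q_rows[idx].astype(np.float32) * self.scale[idx] + self.row_min[idx]
                out[i] = row
                self.cache[idx] = row
                if len(self.cache) > self.cache_capacity:
                    self.cache.popitem(last=False)   # evict least recently used
        return out

# Example: 10k rows of dimension 16, look up a small batch of ids.
table = np.random.randn(10_000, 16).astype(np.float32)
emb = MixedPrecisionEmbedding(table, cache_capacity=256)
vecs = emb.lookup([3, 17, 3, 42])
print(vecs.shape)  # (4, 16)
```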

Fast Distributed Training of Deep Neural Networks: Dynamic Communication Thresholding for Model and Data Parallelism

Oct 18, 2020
Vipul Gupta, Dhruv Choudhary, Ping Tak Peter Tang, Xiaohan Wei, Xing Wang, Yuzhen Huang, Arun Kejariwal, Kannan Ramchandran, Michael W. Mahoney

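As a generic illustration of communication thresholding for distributed training (transmit only the largest-magnitude gradient entries and carry the remainder forward as error feedback), here is a small NumPy sketch; the fixed top-k rule, the residual buffer, and all names are assumptions, not the dynamic thresholding scheme from the paper.

```python
import numpy as np

def threshold_compress(grad, residual, keep_fraction=0.01):
    """Keep only the largest-magnitude entries of (grad + residual);
    everything else is folded back into the residual for the next step.
    Generic sketch of thresholded gradient communication."""
    corrected = grad + residual
    k = max(1, int(keep_fraction * corrected.size))
    # Indices of the k largest-magnitude entries.
    idx = np.argpartition(np.abs(corrected), -k)[-k:]
    values = corrected[idx]
    new_residual = corrected.copy()
    new_residual[idx] = 0.0        # the transmitted part leaves the buffer
    return idx, values, new_residual

# Example: compress a gradient vector to roughly 1% of its entries.
rng = np.random.default_rng(0)
grad = rng.normal(size=10_000)
residual = np.zeros_like(grad)
idx, values, residual = threshold_compress(grad, residual)
print(len(idx), "entries sent instead of", grad.size)
```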

A Progressive Batching L-BFGS Method for Machine Learning

May 30, 2018
Raghu Bollapragada, Dheevatsa Mudigere, Jorge Nocedal, Hao-Jun Michael Shi, Ping Tak Peter Tang

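To convey the general flavor of progressive batching with L-BFGS (gradient and curvature estimates computed on sample batches that grow as optimization proceeds), here is a minimal PyTorch sketch built on torch.optim.LBFGS; the geometric batch-growth rule, the toy least-squares problem, and all constants are assumptions for illustration, not the method or experiments from the paper.

```python
import torch

# Toy least-squares problem: fit w so that X @ w ~ y.
torch.manual_seed(0)
X = torch.randn(50_000, 20)
w_true = torch.randn(20)
y = X @ w_true + 0.01 * torch.randn(50_000)

w = torch.zeros(20, requires_grad=True)
opt = torch.optim.LBFGS([w], lr=1.0, max_iter=10, history_size=10)

batch_size = 512                      # start with a small sample ...
for outer in range(8):
    idx = torch.randint(0, X.shape[0], (batch_size,))
    Xb, yb = X[idx], y[idx]

    def closure():
        # Loss and gradient on the current batch only.
        opt.zero_grad()
        loss = torch.mean((Xb @ w - yb) ** 2)
        loss.backward()
        return loss

    loss = opt.step(closure)
    print(f"batch {batch_size:6d}  loss {loss.item():.6f}")
    batch_size = min(2 * batch_size, X.shape[0])   # ... and grow it each round
```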

Dictionary Learning by Dynamical Neural Networks

May 23, 2018
Tsung-Han Lin, Ping Tak Peter Tang


Enabling Sparse Winograd Convolution by Native Pruning

Oct 13, 2017
Sheng Li, Jongsoo Park, Ping Tak Peter Tang


Faster CNNs with Direct Sparse Convolutions and Guided Pruning

Jul 28, 2017
Jongsoo Park, Sheng Li, Wei Wen, Ping Tak Peter Tang, Hai Li, Yiran Chen, Pradeep Dubey

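As a plain illustration of direct sparse convolution (accumulating the output directly from the nonzero kernel weights, without densifying the pruned filter or building an im2col buffer), here is a small NumPy sketch for stride-1, unpadded 2D convolution; the data layout and the magnitude thresholding used to sparsify the kernel are assumptions, not the paper's optimized kernels or its guided pruning procedure.

```python
import numpy as np

def direct_sparse_conv2d(x, weight, threshold=0.0):
    """x: (C_in, H, W) input, weight: (C_out, C_in, KH, KW) filter.
    Entries with |weight| <= threshold are treated as pruned and skipped.
    Stride 1, no padding. Illustrative sketch only."""
    c_in, h, w = x.shape
    c_out, _, kh, kw = weight.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    out = np.zeros((c_out, out_h, out_w), dtype=x.dtype)
    # Iterate only over the surviving (nonzero) weights.
    for oc, ic, i, j in zip(*np.nonzero(np.abs(weight) > threshold)):
        out[oc] += weight[oc, ic, i, j] * x[ic, i:i + out_h, j:j + out_w]
    return out

# Example: a heavily pruned 3x3 kernel applied to a random input.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 32, 32))
weight = rng.normal(size=(8, 3, 3, 3))
weight[np.abs(weight) < 1.2] = 0.0     # prune most of the weights
y = direct_sparse_conv2d(x, weight)
print(y.shape)  # (8, 30, 30)
```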

Sparse Coding by Spiking Neural Networks: Convergence Theory and Computational Results

May 15, 2017
Ping Tak Peter Tang, Tsung-Han Lin, Mike Davies


On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima

Feb 09, 2017
Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, Ping Tak Peter Tang
