Hanlin Tang

EasyQuant: An Efficient Data-free Quantization Algorithm for LLMs

Mar 05, 2024
Hanlin Tang, Yifu Sun, Decheng Wu, Kai Liu, Jianchen Zhu, Zhanhui Kang


MKQ-BERT: Quantized BERT with 4-bits Weights and Activations

Mar 25, 2022
Hanlin Tang, Xipeng Zhang, Kai Liu, Jianchen Zhu, Zhanhui Kang


PASTO: Strategic Parameter Optimization in Recommendation Systems -- Probabilistic is Better than Deterministic

Aug 20, 2021
Weicong Ding, Hanlin Tang, Jingshuo Feng, Lei Yuan, Sen Yang, Guangxu Yang, Jie Zheng, Jing Wang, Qiang Su, Dong Zheng, Xuezhong Qiu, Yongqi Liu, Yuxuan Chen, Yang Liu, Chao Song, Dongying Kong, Kai Ren, Peng Jiang, Qiao Lian, Ji Liu


On the geometry of generalization and memorization in deep neural networks

May 30, 2021
Cory Stephenson, Suchismita Padhy, Abhinav Ganesh, Yue Hui, Hanlin Tang, SueYeon Chung


Syntactic Perturbations Reveal Representational Correlates of Hierarchical Phrase Structure in Pretrained Language Models

Apr 15, 2021
Matteo Alleman, Jonathan Mamou, Miguel A Del Rio, Hanlin Tang, Yoon Kim, SueYeon Chung


1-bit LAMB: Communication Efficient Large-Scale Large-Batch Training with LAMB's Convergence Speed

Apr 13, 2021
Conglong Li, Ammar Ahmad Awan, Hanlin Tang, Samyam Rajbhandari, Yuxiong He


1-bit Adam: Communication Efficient Large-Scale Training with Adam's Convergence Speed

Feb 04, 2021
Hanlin Tang, Shaoduo Gan, Ammar Ahmad Awan, Samyam Rajbhandari, Conglong Li, Xiangru Lian, Ji Liu, Ce Zhang, Yuxiong He


APMSqueeze: A Communication Efficient Adam-Preconditioned Momentum SGD Algorithm

Aug 28, 2020
Hanlin Tang, Shaoduo Gan, Samyam Rajbhandari, Xiangru Lian, Ji Liu, Yuxiong He, Ce Zhang


Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning

Jul 14, 2020
Shauharda Khadka, Estelle Aflalo, Mattias Marder, Avrech Ben-David, Santiago Miret, Hanlin Tang, Shie Mannor, Tamir Hazan, Somdeb Majumdar
