Changsheng Zhao

MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases

Feb 22, 2024
Zechun Liu, Changsheng Zhao, Forrest Iandola, Chen Lai, Yuandong Tian, Igor Fedorov, Yunyang Xiong, Ernie Chang, Yangyang Shi, Raghuraman Krishnamoorthi, Liangzhen Lai, Vikas Chandra

Not All Weights Are Created Equal: Enhancing Energy Efficiency in On-Device Streaming Speech Recognition

Feb 20, 2024
Yang Li, Yuan Shangguan, Yuhao Wang, Liangzhen Lai, Ernie Chang, Changsheng Zhao, Yangyang Shi, Vikas Chandra

On The Open Prompt Challenge In Conditional Audio Generation

Nov 01, 2023
Ernie Chang, Sidd Srinivasan, Mahi Luthra, Pin-Jie Lin, Varun Nagaraja, Forrest Iandola, Zechun Liu, Zhaoheng Ni, Changsheng Zhao, Yangyang Shi, Vikas Chandra

Revisiting Sample Size Determination in Natural Language Understanding

Jul 01, 2023
Ernie Chang, Muhammad Hassan Rashid, Pin-Jie Lin, Changsheng Zhao, Vera Demberg, Yangyang Shi, Vikas Chandra

LLM-QAT: Data-Free Quantization Aware Training for Large Language Models

May 29, 2023
Zechun Liu, Barlas Oguz, Changsheng Zhao, Ernie Chang, Pierre Stock, Yashar Mehdad, Yangyang Shi, Raghuraman Krishnamoorthi, Vikas Chandra

Hyperparameter-free Continuous Learning for Domain Classification in Natural Language Understanding

Jan 05, 2022
Ting Hua, Yilin Shen, Changsheng Zhao, Yen-Chang Hsu, Hongxia Jin

Automatic Mixed-Precision Quantization Search of BERT

Dec 30, 2021
Changsheng Zhao, Ting Hua, Yilin Shen, Qian Lou, Hongxia Jin
