Shaojin Ding

USM-Lite: Quantization and Sparsity Aware Fine-tuning for Speech Recognition with Universal Speech Models

Jan 03, 2024
Shaojin Ding, David Qiu, David Rim, Yanzhang He, Oleg Rybakov, Bo Li, Rohit Prabhavalkar, Weiran Wang, Tara N. Sainath, Shivani Agrawal, Zhonglin Han, Jian Li, Amir Yazdanbakhsh

2-bit Conformer quantization for automatic speech recognition

May 26, 2023
Oleg Rybakov, Phoenix Meadowlark, Shaojin Ding, David Qiu, Jian Li, David Rim, Yanzhang He

RAND: Robustness Aware Norm Decay For Quantized Seq2seq Models

May 24, 2023
David Qiu, David Rim, Shaojin Ding, Oleg Rybakov, Yanzhang He

Sharing Low Rank Conformer Weights for Tiny Always-On Ambient Speech Recognition Models

Mar 15, 2023
Steven M. Hernandez, Ding Zhao, Shaojin Ding, Antoine Bruguier, Rohit Prabhavalkar, Tara N. Sainath, Yanzhang He, Ian McGraw

A Unified Cascaded Encoder ASR Model for Dynamic Model Sizes

Apr 20, 2022
Shaojin Ding, Weiran Wang, Ding Zhao, Tara N. Sainath, Yanzhang He, Robert David, Rami Botros, Xin Wang, Rina Panigrahy, Qiao Liang, Dongseong Hwang, Ian McGraw, Rohit Prabhavalkar, Trevor Strohman

Personal VAD 2.0: Optimizing Personal Voice Activity Detection for On-Device Speech Recognition

Apr 13, 2022
Shaojin Ding, Rajeev Rikhye, Qiao Liang, Yanzhang He, Quan Wang, Arun Narayanan, Tom O'Malley, Ian McGraw

4-bit Conformer with Native Quantization Aware Training for Speech Recognition

Mar 29, 2022
Shaojin Ding, Phoenix Meadowlark, Yanzhang He, Lukasz Lew, Shivani Agrawal, Oleg Rybakov

Towards Lifelong Learning of Multilingual Text-To-Speech Synthesis

Oct 09, 2021
Mu Yang, Shaojin Ding, Tianlong Chen, Tong Wang, Zhangyang Wang
