
Qiujia Li

Handling Ambiguity in Emotion: From Out-of-Domain Detection to Distribution Estimation

Feb 20, 2024
Wen Wu, Bo Li, Chao Zhang, Chung-Cheng Chiu, Qiujia Li, Junwen Bai, Tara N. Sainath, Philip C. Woodland

Efficient Adapter Finetuning for Tail Languages in Streaming Multilingual ASR

Jan 17, 2024
Junwen Bai, Bo Li, Qiujia Li, Tara N. Sainath, Trevor Strohman

Massive End-to-end Models for Short Search Queries

Sep 22, 2023
Weiran Wang, Rohit Prabhavalkar, Dongseong Hwang, Qiujia Li, Khe Chai Sim, Bo Li, James Qin, Xingyu Cai, Adam Stooke, Zhong Meng, CJ Zheng, Yanzhang He, Tara Sainath, Pedro Moreno Mengibar

Modular Domain Adaptation for Conformer-Based Streaming ASR

May 22, 2023
Qiujia Li, Bo Li, Dongseong Hwang, Tara N. Sainath, Pedro M. Mengibar

Knowledge Distillation from Multiple Foundation Models for End-to-End Speech Recognition

Mar 20, 2023
Xiaoyu Yang, Qiujia Li, Chao Zhang, Philip C. Woodland

Knowledge Distillation for Neural Transducers from Large Self-Supervised Pre-trained Models

Oct 07, 2021
Xiaoyu Yang, Qiujia Li, Philip C. Woodland

Improving Confidence Estimation on Out-of-Domain Data for End-to-End Speech Recognition

Oct 07, 2021
Qiujia Li, Yu Zhang, David Qiu, Yanzhang He, Liangliang Cao, Philip C. Woodland

Combining Frame-Synchronous and Label-Synchronous Systems for Speech Recognition

Jul 01, 2021
Qiujia Li, Chao Zhang, Philip C. Woodland

Multi-Task Learning for End-to-End ASR Word and Utterance Confidence with Deletion Prediction

Apr 26, 2021
David Qiu, Yanzhang He, Qiujia Li, Yu Zhang, Liangliang Cao, Ian McGraw

Residual Energy-Based Models for End-to-End Speech Recognition

Mar 25, 2021
Qiujia Li, Yu Zhang, Bo Li, Liangliang Cao, Philip C. Woodland
