Chunyang Wu

Flexi-Transducer: Optimizing Latency, Accuracy and Compute for Multi-Domain On-Device Scenarios

Apr 06, 2021
Jay Mahadeokar, Yangyang Shi, Yuan Shangguan, Chunyang Wu, Alex Xiao, Hang Su, Duc Le, Ozlem Kalinli, Christian Fuegen, Michael L. Seltzer

Dissecting User-Perceived Latency of On-Device E2E Speech Recognition

Apr 06, 2021
Yuan Shangguan, Rohit Prabhavalkar, Hang Su, Jay Mahadeokar, Yangyang Shi, Jiatong Zhou, Chunyang Wu, Duc Le, Ozlem Kalinli, Christian Fuegen, Michael L. Seltzer

Dynamic Encoder Transducer: A Flexible Solution For Trading Off Accuracy For Latency

Apr 05, 2021
Yangyang Shi, Varun Nagaraja, Chunyang Wu, Jay Mahadeokar, Duc Le, Rohit Prabhavalkar, Alex Xiao, Ching-Feng Yeh, Julian Chan, Christian Fuegen, Ozlem Kalinli, Michael L. Seltzer

Streaming Attention-Based Models with Augmented Memory for End-to-End Speech Recognition

Nov 03, 2020
Ching-Feng Yeh, Yongqiang Wang, Yangyang Shi, Chunyang Wu, Frank Zhang, Julian Chan, Michael L. Seltzer

Transformer in action: a comparative study of transformer-based acoustic models for large scale speech recognition applications

Oct 29, 2020
Yongqiang Wang, Yangyang Shi, Frank Zhang, Chunyang Wu, Julian Chan, Ching-Feng Yeh, Alex Xiao

Emformer: Efficient Memory Transformer Based Acoustic Model For Low Latency Streaming Speech Recognition

Oct 29, 2020
Yangyang Shi, Yongqiang Wang, Chunyang Wu, Ching-Feng Yeh, Julian Chan, Frank Zhang, Duc Le, Mike Seltzer

Weak-Attention Suppression For Transformer Based Speech Recognition

May 18, 2020
Yangyang Shi, Yongqiang Wang, Chunyang Wu, Christian Fuegen, Frank Zhang, Duc Le, Ching-Feng Yeh, Michael L. Seltzer

Streaming Transformer-based Acoustic Models Using Self-attention with Augmented Memory

May 16, 2020
Chunyang Wu, Yongqiang Wang, Yangyang Shi, Ching-Feng Yeh, Frank Zhang
