Yongqiang Wang

BigSSL: Exploring the Frontier of Large-Scale Semi-Supervised Learning for Automatic Speech Recognition

Oct 01, 2021
Yu Zhang, Daniel S. Park, Wei Han, James Qin, Anmol Gulati, Joel Shor, Aren Jansen, Yuanzhong Xu, Yanping Huang, Shibo Wang, Zongwei Zhou, Bo Li, Min Ma, William Chan, Jiahui Yu, Yongqiang Wang, Liangliang Cao, Khe Chai Sim, Bhuvana Ramabhadran, Tara N. Sainath, Françoise Beaufays, Zhifeng Chen, Quoc V. Le, Chung-Cheng Chiu, Ruoming Pang, Yonghui Wu


Streaming Attention-Based Models with Augmented Memory for End-to-End Speech Recognition

Nov 03, 2020
Ching-Feng Yeh, Yongqiang Wang, Yangyang Shi, Chunyang Wu, Frank Zhang, Julian Chan, Michael L. Seltzer


Streaming Simultaneous Speech Translation with Augmented Memory Transformer

Oct 30, 2020
Xutai Ma, Yongqiang Wang, Mohammad Javad Dousti, Philipp Koehn, Juan Pino


Transformer in action: a comparative study of transformer-based acoustic models for large scale speech recognition applications

Oct 29, 2020
Yongqiang Wang, Yangyang Shi, Frank Zhang, Chunyang Wu, Julian Chan, Ching-Feng Yeh, Alex Xiao


Emformer: Efficient Memory Transformer Based Acoustic Model For Low Latency Streaming Speech Recognition

Oct 29, 2020
Yangyang Shi, Yongqiang Wang, Chunyang Wu, Ching-Feng Yeh, Julian Chan, Frank Zhang, Duc Le, Mike Seltzer


Faster, Simpler and More Accurate Hybrid ASR Systems Using Wordpieces

May 19, 2020
Frank Zhang, Yongqiang Wang, Xiaohui Zhang, Chunxi Liu, Yatharth Saraf, Geoffrey Zweig


Weak-Attention Suppression For Transformer Based Speech Recognition

May 18, 2020
Yangyang Shi, Yongqiang Wang, Chunyang Wu, Christian Fuegen, Frank Zhang, Duc Le, Ching-Feng Yeh, Michael L. Seltzer


Streaming Transformer-based Acoustic Models Using Self-attention with Augmented Memory

May 16, 2020
Chunyang Wu, Yongqiang Wang, Yangyang Shi, Ching-Feng Yeh, Frank Zhang


Improving N-gram Language Models with Pre-trained Deep Transformer

Nov 22, 2019
Yiren Wang, Hongzhao Huang, Zhe Liu, Yutong Pang, Yongqiang Wang, ChengXiang Zhai, Fuchun Peng


Transformer-Transducer: End-to-End Speech Recognition with Self-Attention

Oct 28, 2019
Ching-Feng Yeh, Jay Mahadeokar, Kaustubh Kalgaonkar, Yongqiang Wang, Duc Le, Mahaveer Jain, Kjell Schubert, Christian Fuegen, Michael L. Seltzer
