Julian Chan

On lattice-free boosted MMI training of HMM and CTC-based full-context ASR models

Jul 09, 2021
Xiaohui Zhang, Vimal Manohar, David Zhang, Frank Zhang, Yangyang Shi, Nayan Singhal, Julian Chan, Fuchun Peng, Yatharth Saraf, Mike Seltzer


Contextualized Streaming End-to-End Speech Recognition with Trie-Based Deep Biasing and Shallow Fusion

Apr 05, 2021
Duc Le, Mahaveer Jain, Gil Keren, Suyoun Kim, Yangyang Shi, Jay Mahadeokar, Julian Chan, Yuan Shangguan, Christian Fuegen, Ozlem Kalinli, Yatharth Saraf, Michael L. Seltzer


Dynamic Encoder Transducer: A Flexible Solution For Trading Off Accuracy For Latency

Apr 05, 2021
Yangyang Shi, Varun Nagaraja, Chunyang Wu, Jay Mahadeokar, Duc Le, Rohit Prabhavalkar, Alex Xiao, Ching-Feng Yeh, Julian Chan, Christian Fuegen, Ozlem Kalinli, Michael L. Seltzer


Deep Shallow Fusion for RNN-T Personalization

Nov 16, 2020
Duc Le, Gil Keren, Julian Chan, Jay Mahadeokar, Christian Fuegen, Michael L. Seltzer


Streaming Attention-Based Models with Augmented Memory for End-to-End Speech Recognition

Nov 03, 2020
Ching-Feng Yeh, Yongqiang Wang, Yangyang Shi, Chunyang Wu, Frank Zhang, Julian Chan, Michael L. Seltzer


Transformer in action: a comparative study of transformer-based acoustic models for large scale speech recognition applications

Oct 29, 2020
Yongqiang Wang, Yangyang Shi, Frank Zhang, Chunyang Wu, Julian Chan, Ching-Feng Yeh, Alex Xiao


Emformer: Efficient Memory Transformer Based Acoustic Model For Low Latency Streaming Speech Recognition

Oct 29, 2020
Yangyang Shi, Yongqiang Wang, Chunyang Wu, Ching-Feng Yeh, Julian Chan, Frank Zhang, Duc Le, Mike Seltzer


Just ASK: Building an Architecture for Extensible Self-Service Spoken Language Understanding

Mar 02, 2018
Anjishnu Kumar, Arpit Gupta, Julian Chan, Sam Tucker, Bjorn Hoffmeister, Markus Dreyer, Stanislav Peshterliev, Ankur Gandhe, Denis Filiminov, Ariya Rastrow, Christian Monson, Agnika Kumar
