Peidong Wang

A Conformer Based Acoustic Model for Robust Automatic Speech Recognition

Mar 20, 2022
Yufeng Yang, Peidong Wang, DeLiang Wang

Predicting Atlantic Multidecadal Variability

Oct 29, 2021
Glenn Liu, Peidong Wang, Matthew Beveridge, Young-Oh Kwon, Iddo Drori

Continuous Speech Separation with Recurrent Selective Attention Network

Oct 28, 2021
Yixuan Zhang, Zhuo Chen, Jian Wu, Takuya Yoshioka, Peidong Wang, Zhong Meng, Jinyu Li

Efficient End-to-End Speech Recognition Using Performers in Conformers

Nov 11, 2020
Peidong Wang, DeLiang Wang

Multitask Training with Text Data for End-to-End Speech Recognition

Oct 27, 2020
Peidong Wang, Tara N. Sainath, Ron J. Weiss

Speaker Separation Using Speaker Inventories and Estimated Speech

Oct 20, 2020
Peidong Wang, Zhuo Chen, DeLiang Wang, Jinyu Li, Yifan Gong

Multi-microphone Complex Spectral Mapping for Utterance-wise and Continuous Speaker Separation

Oct 04, 2020
Zhong-Qiu Wang, Peidong Wang, DeLiang Wang

Bridging the Gap Between Monaural Speech Enhancement and Recognition with Distortion-Independent Acoustic Modeling

Mar 13, 2019
Peidong Wang, Ke Tan, DeLiang Wang

Incorporating Language Level Information into Acoustic Models

Dec 14, 2016
Peidong Wang, DeLiang Wang