Zhong-Qiu Wang

SuperME: Supervised and Mixture-to-Mixture Co-Learning for Speech Enhancement and Robust ASR
Mar 15, 2024
Zhong-Qiu Wang

Mixture to Mixture: Leveraging Close-talk Mixtures as Weak-supervision for Speech Separation
Feb 14, 2024
Zhong-Qiu Wang

USDnet: Unsupervised Speech Dereverberation via Neural Forward Filtering
Feb 01, 2024
Zhong-Qiu Wang

Boosting Unknown-number Speaker Separation with Transformer Decoder-based Attractor
Jan 23, 2024
Younglo Lee, Shukjae Choi, Byeong-Yeol Kim, Zhong-Qiu Wang, Shinji Watanabe

A Single Speech Enhancement Model Unifying Dereverberation, Denoising, Speaker Counting, Separation, and Extraction
Oct 12, 2023
Kohei Saijo, Wangyou Zhang, Zhong-Qiu Wang, Shinji Watanabe, Tetsunori Kobayashi, Tetsuji Ogawa

Toward Universal Speech Enhancement for Diverse Input Conditions
Sep 29, 2023
Wangyou Zhang, Kohei Saijo, Zhong-Qiu Wang, Shinji Watanabe, Yanmin Qian

The Multimodal Information Based Speech Processing (MISP) 2023 Challenge: Audio-Visual Target Speaker Extraction
Sep 15, 2023
Shilong Wu, Chenxi Wang, Hang Chen, Yusheng Dai, Chenyue Zhang, Ruoyu Wang, Hongbo Lan, Jun Du, Chin-Hui Lee, Jingdong Chen, Shinji Watanabe, Sabato Marco Siniscalchi, Odette Scharenborg, Zhong-Qiu Wang, Jia Pan, Jianqing Gao

Exploring the Integration of Speech Separation and Recognition with Self-Supervised Learning Representation
Jul 23, 2023
Yoshiki Masuyama, Xuankai Chang, Wangyou Zhang, Samuele Cornell, Zhong-Qiu Wang, Nobutaka Ono, Yanmin Qian, Shinji Watanabe

The CHiME-7 DASR Challenge: Distant Meeting Transcription with Multiple Devices in Diverse Scenarios
Jul 14, 2023
Samuele Cornell, Matthew Wiesner, Shinji Watanabe, Desh Raj, Xuankai Chang, Paola Garcia, Matthew Maciejewski, Yoshiki Masuyama, Zhong-Qiu Wang, Stefano Squartini, Sanjeev Khudanpur