Shukjae Choi

Boosting Unknown-number Speaker Separation with Transformer Decoder-based Attractor

Jan 23, 2024
Younglo Lee, Shukjae Choi, Byeong-Yeol Kim, Zhong-Qiu Wang, Shinji Watanabe

VoxtLM: Unified decoder-only models for consolidating speech recognition/synthesis and speech/text continuation tasks

Sep 18, 2023
Soumi Maiti, Yifan Peng, Shukjae Choi, Jee-weon Jung, Xuankai Chang, Shinji Watanabe

Neural Speech Enhancement with Very Low Algorithmic Latency and Complexity via Integrated Full- and Sub-Band Modeling

Apr 18, 2023
Zhong-Qiu Wang, Samuele Cornell, Shukjae Choi, Younglo Lee, Byeong-Yeol Kim, Shinji Watanabe

Joint unsupervised and supervised learning for context-aware language identification

Apr 14, 2023
Jinseok Park, Hyung Yong Kim, Jihwan Park, Byeong-Yeol Kim, Shukjae Choi, Yunkyu Lim

TF-GridNet: Integrating Full- and Sub-Band Modeling for Speech Separation

Nov 22, 2022
Zhong-Qiu Wang, Samuele Cornell, Shukjae Choi, Younglo Lee, Byeong-Yeol Kim, Shinji Watanabe

TF-GridNet: Making Time-Frequency Domain Models Great Again for Monaural Speaker Separation

Sep 08, 2022
Zhong-Qiu Wang, Samuele Cornell, Shukjae Choi, Younglo Lee, Byeong-Yeol Kim, Shinji Watanabe
