Samuele Cornell

TF-GridNet: Integrating Full- and Sub-Band Modeling for Speech Separation
Nov 22, 2022
Zhong-Qiu Wang, Samuele Cornell, Shukjae Choi, Younglo Lee, Byeong-Yeol Kim, Shinji Watanabe

End-to-End Integration of Speech Recognition, Dereverberation, Beamforming, and Self-Supervised Learning Representation
Oct 19, 2022
Yoshiki Masuyama, Xuankai Chang, Samuele Cornell, Shinji Watanabe, Nobutaka Ono

Description and analysis of novelties introduced in DCASE Task 4 2022 on the baseline system
Oct 14, 2022
Francesca Ronchini, Samuele Cornell, Romain Serizel, Nicolas Turpault, Eduardo Fonseca, Daniel P. W. Ellis

TF-GridNet: Making Time-Frequency Domain Models Great Again for Monaural Speaker Separation
Sep 08, 2022
Zhong-Qiu Wang, Samuele Cornell, Shukjae Choi, Younglo Lee, Byeong-Yeol Kim, Shinji Watanabe

ESPnet-SE++: Speech Enhancement for Robust Speech Recognition, Translation, and Understanding
Jul 19, 2022
Yen-Ju Lu, Xuankai Chang, Chenda Li, Wangyou Zhang, Samuele Cornell, Zhaoheng Ni, Yoshiki Masuyama, Brian Yan, Robin Scheibler, Zhong-Qiu Wang, Yu Tsao, Yanmin Qian, Shinji Watanabe

Resource-Efficient Separation Transformer
Jun 19, 2022
Cem Subakan, Mirco Ravanelli, Samuele Cornell, Frédéric Lepoutre, François Grondin

Conversational Speech Separation: an Evaluation Study for Streaming Applications
May 31, 2022
Giovanni Morrone, Samuele Cornell, Enrico Zovato, Alessio Brutti, Stefano Squartini

Leveraging Speech Separation for Conversational Telephone Speaker Diarization
Apr 05, 2022
Giovanni Morrone, Samuele Cornell, Desh Raj, Enrico Zovato, Alessio Brutti, Stefano Squartini

Towards Low-distortion Multi-channel Speech Enhancement: The ESPNet-SE Submission to The L3DAS22 Challenge
Feb 24, 2022
Yen-Ju Lu, Samuele Cornell, Xuankai Chang, Wangyou Zhang, Chenda Li, Zhaoheng Ni, Zhong-Qiu Wang, Shinji Watanabe

On Using Transformers for Speech-Separation
Feb 06, 2022
Cem Subakan, Mirco Ravanelli, Samuele Cornell, François Grondin, Mirko Bronzi
