Scott Wisdom

Objective and subjective evaluation of speech enhancement methods in the UDASE task of the 7th CHiME challenge

Feb 02, 2024
Simon Leglaive, Matthieu Fraticelli, Hend ElGhazaly, Léonie Borne, Mostafa Sadeghi, Scott Wisdom, Manuel Pariente, John R. Hershey, Daniel Pressnitzer, Jon P. Barker

TokenSplit: Using Discrete Speech Representations for Direct, Refined, and Transcript-Conditioned Speech Separation and Recognition

Aug 21, 2023
Hakan Erdogan, Scott Wisdom, Xuankai Chang, Zalán Borsos, Marco Tagliasacchi, Neil Zeghidour, John R. Hershey

The CHiME-7 UDASE task: Unsupervised domain adaptation for conversational speech enhancement

Jul 07, 2023
Simon Leglaive, Léonie Borne, Efthymios Tzinis, Mostafa Sadeghi, Matthieu Fraticelli, Scott Wisdom, Manuel Pariente, Daniel Pressnitzer, John R. Hershey

Unsupervised Multi-channel Separation and Adaptation

May 18, 2023
Cong Han, Kevin Wilson, Scott Wisdom, John R. Hershey

AudioSlots: A slot-centric generative model for audio separation

May 09, 2023
Pradyumna Reddy, Scott Wisdom, Klaus Greff, John R. Hershey, Thomas Kipf

AudioScopeV2: Audio-Visual Attention Architectures for Calibrated Open-Domain On-Screen Sound Separation

Jul 20, 2022
Efthymios Tzinis, Scott Wisdom, Tal Remez, John R. Hershey

Distance-Based Sound Separation

Jul 01, 2022
Katharine Patterson, Kevin Wilson, Scott Wisdom, John R. Hershey

Text-Driven Separation of Arbitrary Sounds

Apr 12, 2022
Kevin Kilgour, Beat Gfeller, Qingqing Huang, Aren Jansen, Scott Wisdom, Marco Tagliasacchi

CycleGAN-Based Unpaired Speech Dereverberation

Mar 29, 2022
Hannah Muckenhirn, Aleksandr Safin, Hakan Erdogan, Felix de Chaumont Quitry, Marco Tagliasacchi, Scott Wisdom, John R. Hershey

Adapting Speech Separation to Real-World Meetings Using Mixture Invariant Training

Oct 20, 2021
Aswin Sivaraman, Scott Wisdom, Hakan Erdogan, John R. Hershey
